From d12884e4d365314297b9e446b23d9e846d23e804 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Fri, 28 Apr 2023 16:58:49 +0100 Subject: [PATCH 01/46] Extract common logic from ExecuteQuery, ExecuteMutation and ExecuteSubscriptionEvent --- spec/Section 6 -- Execution.md | 44 +++++++++++++++++++++------------- 1 file changed, 27 insertions(+), 17 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 28862ea89..d70247f9f 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -131,12 +131,8 @@ ExecuteQuery(query, schema, variableValues, initialValue): - Let {queryType} be the root Query type in {schema}. - Assert: {queryType} is an Object type. - Let {selectionSet} be the top level Selection Set in {query}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - queryType, initialValue, variableValues)} _normally_ (allowing - parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, queryType, + selectionSet)}. ### Mutation @@ -153,11 +149,8 @@ ExecuteMutation(mutation, schema, variableValues, initialValue): - Let {mutationType} be the root Mutation type in {schema}. - Assert: {mutationType} is an Object type. - Let {selectionSet} be the top level Selection Set in {mutation}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - mutationType, initialValue, variableValues)} _serially_. -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, mutationType, + selectionSet, true)}. 
### Subscription @@ -301,12 +294,8 @@ ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue): - Let {subscriptionType} be the root Subscription type in {schema}. - Assert: {subscriptionType} is an Object type. - Let {selectionSet} be the top level Selection Set in {subscription}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, - subscriptionType, initialValue, variableValues)} _normally_ (allowing - parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - selection set. -- Return an unordered map containing {data} and {errors}. +- Return {ExecuteRootSelectionSet(variableValues, initialValue, + subscriptionType, selectionSet)}. Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to {ExecuteQuery()} since this is how each event result is produced. @@ -322,6 +311,27 @@ Unsubscribe(responseStream): - Cancel {responseStream} +## Executing the Root Selection Set + +To execute the root selection set, the object value being evaluated and the +object type need to be known, as well as whether it must be executed serially, +or may be executed in parallel. + +Executing the root selection set works similarly for queries (parallel), +mutations (serial), and subscriptions (where it is executed for each event in +the underlying Source Stream). + +ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, +serial): + +- If {serial} is not provided, initialize it to {false}. +- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, + objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, + _normally_ (allowing parallelization) otherwise. +- Let {errors} be the list of all _field error_ raised while executing the + selection set. +- Return an unordered map containing {data} and {errors}. 
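As a non-normative illustration, the shared routine extracted by this patch can be sketched in TypeScript. Every name below is hypothetical (not graphql-js API), and the selection set is reduced to a map from response key to resolver function:

```typescript
// Non-normative sketch: ExecuteQuery, ExecuteMutation and
// ExecuteSubscriptionEvent all delegate to one shared routine.
interface RootResult {
  data: Record<string, unknown>;
  errors: string[];
}

type Resolver = () => unknown;

function executeRootSelectionSet(
  fields: Record<string, Resolver>,
  serial: boolean = false, // {serial} defaults to {false}, as in the spec
): RootResult {
  const data: Record<string, unknown> = {};
  const errors: string[] = [];
  for (const [responseKey, resolve] of Object.entries(fields)) {
    try {
      data[responseKey] = resolve();
    } catch (error) {
      // A field error is recorded and the entry resolves to null.
      errors.push(String(error));
      data[responseKey] = null;
    }
  }
  // `serial` would constrain scheduling in a real executor; with synchronous
  // resolvers, as here, the observable result is identical either way.
  return { data, errors };
}

// Queries execute normally (parallelizable); mutations execute serially.
const executeQuery = (fields: Record<string, Resolver>) =>
  executeRootSelectionSet(fields);
const executeMutation = (fields: Record<string, Resolver>) =>
  executeRootSelectionSet(fields, true);
```

The point of the refactor is visible in the last two definitions: the three entry points differ only in the `serial` flag they pass.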
+ ## Executing Selection Sets To execute a selection set, the object value being evaluated and the object type From 72d5447a8e2deef0e16f71329276f8372c43c126 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Fri, 28 Apr 2023 17:20:43 +0100 Subject: [PATCH 02/46] Change ExecuteSelectionSet to ExecuteGroupedFieldSet --- spec/Section 6 -- Execution.md | 53 ++++++++++++++++++++-------------- 1 file changed, 31 insertions(+), 22 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index d70247f9f..312a1d3f3 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -321,31 +321,34 @@ Executing the root selection set works similarly for queries (parallel), mutations (serial), and subscriptions (where it is executed for each event in the underlying Source Stream). +First, the selection set is turned into a grouped field set; then, we execute +this grouped field set and return the resulting {data} and {errors}. + ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial): - If {serial} is not provided, initialize it to {false}. -- Let {data} be the result of running {ExecuteSelectionSet(selectionSet, +- Let {groupedFieldSet} be the result of {CollectFields(objectType, + selectionSet, variableValues)}. +- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, initialValue, variableValues)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - Let {errors} be the list of all _field error_ raised while executing the selection set. - Return an unordered map containing {data} and {errors}. -## Executing Selection Sets +## Executing a Grouped Field Set -To execute a selection set, the object value being evaluated and the object type -need to be known, as well as whether it must be executed serially, or may be -executed in parallel. 
+To execute a grouped field set, the object value being evaluated and the object +type need to be known, as well as whether it must be executed serially, or may +be executed in parallel. -First, the selection set is turned into a grouped field set; then, each -represented field in the grouped field set produces an entry into a response -map. +Each represented field in the grouped field set produces an entry into a +response map. -ExecuteSelectionSet(selectionSet, objectType, objectValue, variableValues): +ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, +variableValues): -- Let {groupedFieldSet} be the result of {CollectFields(objectType, - selectionSet, variableValues)}. - Initialize {resultMap} to an empty ordered map. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value @@ -363,8 +366,8 @@ is explained in greater detail in the Field Collection section below. **Errors and Non-Null Fields** -If during {ExecuteSelectionSet()} a field with a non-null {fieldType} raises a -_field error_ then that error must propagate to this entire selection set, +If during {ExecuteGroupedFieldSet()} a field with a non-null {fieldType} raises +a _field error_ then that error must propagate to this entire selection set, either resolving to {null} if allowed or further propagated to a parent field. If this occurs, any sibling fields which have not yet executed or have not yet @@ -702,8 +705,9 @@ CompleteValue(fieldType, fields, result, variableValues): - Let {objectType} be {fieldType}. - Otherwise if {fieldType} is an Interface or Union type. - Let {objectType} be {ResolveAbstractType(fieldType, result)}. - - Let {subSelectionSet} be the result of calling {MergeSelectionSets(fields)}. - - Return the result of evaluating {ExecuteSelectionSet(subSelectionSet, + - Let {groupedFieldSet} be the result of calling {CollectSubfields(objectType, + fields, variableValues)}. 
+ - Return the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet, objectType, result, variableValues)} _normally_ (allowing for parallelization). @@ -750,9 +754,9 @@ ResolveAbstractType(abstractType, objectValue): **Merging Selection Sets** -When more than one field of the same name is executed in parallel, their -selection sets are merged together when completing the value in order to -continue execution of the sub-selection sets. +When more than one field of the same name is executed in parallel, during value +completion their selection sets are collected together to produce a single +grouped field set in order to continue execution of the sub-selection sets. An example operation illustrating parallel fields with the same name with sub-selections. @@ -771,14 +775,19 @@ sub-selections. After resolving the value for `me`, the selection sets are merged together so `firstName` and `lastName` can be resolved for one value. -MergeSelectionSets(fields): +CollectSubfields(objectType, fields, variableValues): -- Let {selectionSet} be an empty list. +- Let {groupedFieldSet} be an empty map. - For each {field} in {fields}: - Let {fieldSelectionSet} be the selection set of {field}. - If {fieldSelectionSet} is null or empty, continue to the next field. - - Append all selections in {fieldSelectionSet} to {selectionSet}. -- Return {selectionSet}. + - Let {subGroupedFieldSet} be the result of {CollectFields(objectType, + fieldSelectionSet, variableValues)}. + - For each {subGroupedFieldSet} as {responseKey} and {subfields}: + - Let {groupForResponseKey} be the list in {groupedFieldSet} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all fields in {subfields} to {groupForResponseKey}. +- Return {groupedFieldSet}. 
### Handling Field Errors From 4d62b8b580f079e54cee1ef027a952547c8e6e13 Mon Sep 17 00:00:00 2001 From: Benjie Gillam Date: Mon, 21 Aug 2023 12:15:34 +0100 Subject: [PATCH 03/46] Correct reference to MergeSelectionSets --- spec/Section 5 -- Validation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md index dceec126b..4b1df58fa 100644 --- a/spec/Section 5 -- Validation.md +++ b/spec/Section 5 -- Validation.md @@ -463,7 +463,7 @@ unambiguous. Therefore any two field selections which might both be encountered for the same object are only valid if they are equivalent. During execution, the simultaneous execution of fields with the same response -name is accomplished by {MergeSelectionSets()} and {CollectFields()}. +name is accomplished by {CollectSubfields()}. For simple hand-written GraphQL, this rule is obviously a clear developer error, however nested fragments can make this difficult to detect manually. From 8fd0df3e4cf0d275aca41d7007435f8f7f833582 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 6 Dec 2023 15:52:20 +0200 Subject: [PATCH 04/46] move Field Collection section earlier as it is used during ExecuteRootSelectionSet --- spec/Section 6 -- Execution.md | 214 ++++++++++++++++----------------- 1 file changed, 107 insertions(+), 107 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 312a1d3f3..636e01e5f 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -337,6 +337,112 @@ serial): selection set. - Return an unordered map containing {data} and {errors}. +### Field Collection + +Before execution, the selection set is converted to a grouped field set by +calling {CollectFields()}. Each entry in the grouped field set is a list of +fields that share a response key (the alias if defined, otherwise the field +name). 
This ensures all fields with the same response key (including those in +referenced fragments) are executed at the same time. + +As an example, collecting the fields of this selection set would collect two +instances of the field `a` and one of field `b`: + +```graphql example +{ + a { + subfield1 + } + ...ExampleFragment +} + +fragment ExampleFragment on Query { + a { + subfield2 + } + b +} +``` + +The depth-first-search order of the field groups produced by {CollectFields()} +is maintained through execution, ensuring that fields appear in the executed +response in a stable and predictable order. + +CollectFields(objectType, selectionSet, variableValues, visitedFragments): + +- If {visitedFragments} is not provided, initialize it to the empty set. +- Initialize {groupedFields} to an empty ordered map of lists. +- For each {selection} in {selectionSet}: + - If {selection} provides the directive `@skip`, let {skipDirective} be that + directive. + - If {skipDirective}'s {if} argument is {true} or is a variable in + {variableValues} with the value {true}, continue with the next {selection} + in {selectionSet}. + - If {selection} provides the directive `@include`, let {includeDirective} be + that directive. + - If {includeDirective}'s {if} argument is not {true} and is not a variable + in {variableValues} with the value {true}, continue with the next + {selection} in {selectionSet}. + - If {selection} is a {Field}: + - Let {responseKey} be the response key of {selection} (the alias if + defined, otherwise the field name). + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append {selection} to the {groupForResponseKey}. + - If {selection} is a {FragmentSpread}: + - Let {fragmentSpreadName} be the name of {selection}. + - If {fragmentSpreadName} is in {visitedFragments}, continue with the next + {selection} in {selectionSet}. + - Add {fragmentSpreadName} to {visitedFragments}. 
+ - Let {fragment} be the Fragment in the current Document whose name is + {fragmentSpreadName}. + - If no such {fragment} exists, continue with the next {selection} in + {selectionSet}. + - Let {fragmentType} be the type condition on {fragment}. + - If {DoesFragmentTypeApply(objectType, fragmentType)} is false, continue + with the next {selection} in {selectionSet}. + - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. + - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all items in {fragmentGroup} to {groupForResponseKey}. + - If {selection} is an {InlineFragment}: + - Let {fragmentType} be the type condition on {selection}. + - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, + fragmentType)} is false, continue with the next {selection} in + {selectionSet}. + - Let {fragmentSelectionSet} be the top-level selection set of {selection}. + - Let {fragmentGroupedFieldSet} be the result of calling + {CollectFields(objectType, fragmentSelectionSet, variableValues, + visitedFragments)}. + - For each {fragmentGroup} in {fragmentGroupedFieldSet}: + - Let {responseKey} be the response key shared by all fields in + {fragmentGroup}. + - Let {groupForResponseKey} be the list in {groupedFields} for + {responseKey}; if no such list exists, create it as an empty list. + - Append all items in {fragmentGroup} to {groupForResponseKey}. +- Return {groupedFields}. + +DoesFragmentTypeApply(objectType, fragmentType): + +- If {fragmentType} is an Object Type: + - if {objectType} and {fragmentType} are the same type, return {true}, + otherwise return {false}. 
+- If {fragmentType} is an Interface Type: + - if {objectType} is an implementation of {fragmentType}, return {true} + otherwise return {false}. +- If {fragmentType} is a Union: + - if {objectType} is a possible type of {fragmentType}, return {true} + otherwise return {false}. + +Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` +directives may be applied in either order since they apply commutatively. + ## Executing a Grouped Field Set To execute a grouped field set, the object value being evaluated and the object @@ -362,7 +468,7 @@ variableValues): - Return {resultMap}. Note: {resultMap} is ordered by which fields appear first in the operation. This -is explained in greater detail in the Field Collection section below. +is explained in greater detail in the Field Collection section above. **Errors and Non-Null Fields** @@ -472,112 +578,6 @@ A correct executor must generate the following result for that selection set: } ``` -### Field Collection - -Before execution, the selection set is converted to a grouped field set by -calling {CollectFields()}. Each entry in the grouped field set is a list of -fields that share a response key (the alias if defined, otherwise the field -name). This ensures all fields with the same response key (including those in -referenced fragments) are executed at the same time. - -As an example, collecting the fields of this selection set would collect two -instances of the field `a` and one of field `b`: - -```graphql example -{ - a { - subfield1 - } - ...ExampleFragment -} - -fragment ExampleFragment on Query { - a { - subfield2 - } - b -} -``` - -The depth-first-search order of the field groups produced by {CollectFields()} -is maintained through execution, ensuring that fields appear in the executed -response in a stable and predictable order. - -CollectFields(objectType, selectionSet, variableValues, visitedFragments): - -- If {visitedFragments} is not provided, initialize it to the empty set. 
-- Initialize {groupedFields} to an empty ordered map of lists. -- For each {selection} in {selectionSet}: - - If {selection} provides the directive `@skip`, let {skipDirective} be that - directive. - - If {skipDirective}'s {if} argument is {true} or is a variable in - {variableValues} with the value {true}, continue with the next {selection} - in {selectionSet}. - - If {selection} provides the directive `@include`, let {includeDirective} be - that directive. - - If {includeDirective}'s {if} argument is not {true} and is not a variable - in {variableValues} with the value {true}, continue with the next - {selection} in {selectionSet}. - - If {selection} is a {Field}: - - Let {responseKey} be the response key of {selection} (the alias if - defined, otherwise the field name). - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append {selection} to the {groupForResponseKey}. - - If {selection} is a {FragmentSpread}: - - Let {fragmentSpreadName} be the name of {selection}. - - If {fragmentSpreadName} is in {visitedFragments}, continue with the next - {selection} in {selectionSet}. - - Add {fragmentSpreadName} to {visitedFragments}. - - Let {fragment} be the Fragment in the current Document whose name is - {fragmentSpreadName}. - - If no such {fragment} exists, continue with the next {selection} in - {selectionSet}. - - Let {fragmentType} be the type condition on {fragment}. - - If {DoesFragmentTypeApply(objectType, fragmentType)} is false, continue - with the next {selection} in {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {fragment}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. 
- - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. - - If {selection} is an {InlineFragment}: - - Let {fragmentType} be the type condition on {selection}. - - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, - fragmentType)} is false, continue with the next {selection} in - {selectionSet}. - - Let {fragmentSelectionSet} be the top-level selection set of {selection}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - visitedFragments)}. - - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - - Let {responseKey} be the response key shared by all fields in - {fragmentGroup}. - - Let {groupForResponseKey} be the list in {groupedFields} for - {responseKey}; if no such list exists, create it as an empty list. - - Append all items in {fragmentGroup} to {groupForResponseKey}. -- Return {groupedFields}. - -DoesFragmentTypeApply(objectType, fragmentType): - -- If {fragmentType} is an Object Type: - - if {objectType} and {fragmentType} are the same type, return {true}, - otherwise return {false}. -- If {fragmentType} is an Interface Type: - - if {objectType} is an implementation of {fragmentType}, return {true} - otherwise return {false}. -- If {fragmentType} is a Union: - - if {objectType} is a possible type of {fragmentType}, return {true} - otherwise return {false}. - -Note: The steps in {CollectFields()} evaluating the `@skip` and `@include` -directives may be applied in either order since they apply commutatively. 
- ## Executing Fields Each field requested in the grouped field set that is defined on the selected From f5e26e3a07d8387bd7a531568c1ab5918fe46374 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Thu, 7 Dec 2023 12:51:56 +0200 Subject: [PATCH 05/46] enhance(ResolveFieldValue): add async collection language and some baseline collection language for comparison extracted from #742 Authored-by: Rob Richard --- spec/Section 6 -- Execution.md | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 636e01e5f..18e5237f5 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -664,6 +664,12 @@ As an example, this might accept the {objectType} `Person`, the {field} {"soulMate"}, and the {objectValue} representing John Lennon. It would be expected to yield the value representing Yoko Ono. +List values are resolved similarly. For example, {ResolveFieldValue} might also +accept the {objectType} `MusicBand`, the {field} {"members"}, and the +{objectValue} representing the Beatles. It would be expected to yield a +collection of values representing John Lennon, Paul McCartney, Ringo Starr and +George Harrison. + ResolveFieldValue(objectType, objectValue, fieldName, argumentValues): - Let {resolver} be the internal function provided by {objectType} for @@ -674,7 +680,8 @@ ResolveFieldValue(objectType, objectValue, fieldName, argumentValues): Note: It is common for {resolver} to be asynchronous due to relying on reading an underlying database or networked service to produce a value. This necessitates the rest of a GraphQL executor to handle an asynchronous execution -flow. +flow. In addition, an implementation for collections may leverage asynchronous +iterators or asynchronous generators provided by many programming languages. 
### Value Completion From 648bf344a2783c3a0a4a698076c4da18301a3f4b Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Thu, 18 Aug 2022 17:21:00 -0400 Subject: [PATCH 06/46] Introduce @stream. Authored-by: Rob Richard Co-authored-by: Benjie Gillam Co-authored-by: Yaacov Rydzinski --- cspell.yml | 1 + spec/Section 3 -- Type System.md | 57 +++++ spec/Section 5 -- Validation.md | 163 +++++++++++++ spec/Section 6 -- Execution.md | 400 +++++++++++++++++++++++++++++-- spec/Section 7 -- Response.md | 179 ++++++++++++-- 5 files changed, 764 insertions(+), 36 deletions(-) diff --git a/cspell.yml b/cspell.yml index e8aa73355..8bc4a231c 100644 --- a/cspell.yml +++ b/cspell.yml @@ -4,6 +4,7 @@ ignoreRegExpList: - /[a-z]{2,}'s/ words: # Terms of art + - deprioritization - endianness - interoperation - monospace diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index bb0d50b35..6f121ba44 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -1941,6 +1941,11 @@ by a validator, executor, or client tool such as a code generator. GraphQL implementations should provide the `@skip` and `@include` directives. +GraphQL implementations are not required to implement the `@stream` directive. +If the directive is implemented, it must be implemented according to this +specification. GraphQL implementations that do not support the `@stream` +directive must not make it available via introspection. + GraphQL implementations that support the type system definition language must provide the `@deprecated` directive if representing deprecated portions of the schema. @@ -2161,3 +2166,55 @@ to the relevant IETF specification. ```graphql example scalar UUID @specifiedBy(url: "https://tools.ietf.org/html/rfc4122") ``` + +### @stream + +```graphql +directive @stream( + label: String + if: Boolean! 
= true
  initialCount: Int = 0
) on FIELD
```

The `@stream` directive may be provided for a field of `List` type so that the
backend can leverage technology such as asynchronous iterators to provide a
partial list in the initial response, and additional list items in subsequent
responses. `@include` and `@skip` take precedence over `@stream`.

```graphql example
query myQuery($shouldStream: Boolean) {
  user {
    friends(first: 10) {
      nodes @stream(label: "friendsStream", initialCount: 5, if: $shouldStream)
    }
  }
}
```

#### @stream Arguments

- `if: Boolean! = true` - When `true`, field _should_ be streamed (See
  [related note](#note-088b7)). When `false`, the field will not be streamed and
  all list items will be included in the initial response. Defaults to `true`
  when omitted.
- `label: String` - May be used by GraphQL clients to identify the data from
  responses and associate it with the corresponding stream directive. If
  provided, the GraphQL service must add it to the corresponding payload.
  `label` must be a unique label across all `@stream` directives in a document.
  `label` must not be provided as a variable.
- `initialCount: Int` - The number of list items the service should return as
  part of the initial response. If omitted, defaults to `0`. A field error will
  be raised if the value of this argument is less than `0`.

Note: The ability to stream parts of a response can have a potentially
significant impact on application performance. Developers generally need clear,
predictable control over their application's performance. It is highly
recommended that GraphQL services honor the `@stream` directive on each
execution. However, the specification allows advanced use cases where the
service can determine that it is more performant to not stream. Therefore,
GraphQL clients _must_ be able to process a response that ignores the `@stream`
directive.
This also applies to the `initialCount` argument on the `@stream` +directive. Clients _must_ be able to process a streamed response that contains a +different number of initial list items than what was specified in the +`initialCount` argument. diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md index 4b1df58fa..fa5cffe3a 100644 --- a/spec/Section 5 -- Validation.md +++ b/spec/Section 5 -- Validation.md @@ -422,6 +422,7 @@ FieldsInSetCanMerge(set): {set} including visiting fragments and inline fragments. - Given each pair of members {fieldA} and {fieldB} in {fieldsForName}: - {SameResponseShape(fieldA, fieldB)} must be true. + - {SameStreamDirective(fieldA, fieldB)} must be true. - If the parent types of {fieldA} and {fieldB} are equal or if either is not an Object Type: - {fieldA} and {fieldB} must have identical field names. @@ -455,6 +456,16 @@ SameResponseShape(fieldA, fieldB): - If {SameResponseShape(subfieldA, subfieldB)} is false, return false. - Return true. +SameStreamDirective(fieldA, fieldB): + +- If neither {fieldA} nor {fieldB} has a directive named `stream`. + - Return true. +- If both {fieldA} and {fieldB} have a directive named `stream`. + - Let {streamA} be the directive named `stream` on {fieldA}. + - Let {streamB} be the directive named `stream` on {fieldB}. + - If {streamA} and {streamB} have identical sets of arguments, return true. +- Return false. + **Explanatory Text** If multiple field selections with the same response names are encountered during @@ -1517,6 +1528,158 @@ query ($foo: Boolean = true, $bar: Boolean = false) { } ``` +### Stream Directives Are Used On Valid Root Field + +**Formal Specification** + +- For every {directive} in a document. +- Let {directiveName} be the name of {directive}. +- Let {mutationType} be the root Mutation type in {schema}. +- Let {subscriptionType} be the root Subscription type in {schema}. 
- If {directiveName} is "stream":
  - The parent type of {directive} must not be {mutationType} or
    {subscriptionType}.

**Explanatory Text**

The `@stream` directive is not allowed to be used on root fields of the
mutation or subscription type.

For example, the following document will not pass validation because `@stream`
has been used on a root mutation field:

```raw graphql counter-example
mutation {
  mutationField @stream
}
```

### Stream Directives Are Used On Valid Operations

**Formal Specification**

- Let {subscriptionFragments} be the empty set.
- For each {operation} in a document:
  - If {operation} is a subscription operation:
    - Let {fragments} be every fragment referenced by that {operation}
      transitively.
    - For each {fragment} in {fragments}:
      - Let {fragmentName} be the name of {fragment}.
      - Add {fragmentName} to {subscriptionFragments}.
- For every {directive} in a document:
  - If {directiveName} is not "stream":
    - Continue to the next {directive}.
  - Let {ancestor} be the ancestor operation or fragment definition of
    {directive}.
  - If {ancestor} is a fragment definition:
    - If the fragment name of {ancestor} is not present in
      {subscriptionFragments}:
      - Continue to the next {directive}.
  - If {ancestor} is not a subscription operation:
    - Continue to the next {directive}.
  - Let {if} be the argument named "if" on {directive}.
  - {if} must be defined.
  - Let {argumentValue} be the value passed to {if}.
  - {argumentValue} must be a variable, or the boolean value "false".

**Explanatory Text**

The `@stream` directive cannot be used to stream data in subscription
operations. If it appears in a subscription operation it must be disabled using
the "if" argument. This rule will not permit any `@stream` directive on a
subscription operation that cannot be disabled using the "if" argument.
For example, the following document will not pass validation because `@stream`
has been used in a subscription operation with no "if" argument defined:

```raw graphql counter-example
subscription sub {
  newMessage @stream {
    body
  }
}
```

### Stream Directive Labels Are Unique

**Formal Specification**

- Let {labelValues} be an empty set.
- For every {directive} in the document:
  - Let {directiveName} be the name of {directive}.
  - If {directiveName} is "stream":
    - For every {argument} in {directive}:
      - Let {argumentName} be the name of {argument}.
      - Let {argumentValue} be the value passed to {argument}.
      - If {argumentName} is "label":
        - {argumentValue} must not be a variable.
        - {argumentValue} must not be present in {labelValues}.
        - Append {argumentValue} to {labelValues}.

**Explanatory Text**

The `@stream` directive accepts an argument "label". This label may be used by
GraphQL clients to uniquely identify response payloads. If a label is passed, it
must not be a variable and it must be unique among all other `@stream`
directives in the document.

For example, the following document is valid:

```graphql example
{
  pets @stream {
    name
  }
  pets @stream(label: "petStream") {
    owner {
      name
    }
  }
}
```

For example, the following document will not pass validation because the same
label is used in different `@stream` directives:

```raw graphql counter-example
{
  pets @stream(label: "MyLabel") {
    name
  }
  pets @stream(label: "MyLabel") {
    owner {
      name
    }
  }
}
```

### Stream Directives Are Used On List Fields

**Formal Specification**

- For every {directive} in a document.
- Let {directiveName} be the name of {directive}.
- If {directiveName} is "stream":
  - Let {adjacent} be the AST node the directive affects.
  - {adjacent} must be a List type.
+
+**Explanatory Text**
+
+GraphQL directive locations do not provide enough granularity to distinguish the
+type of fields used in a GraphQL document. Since the stream directive is only
+valid on list fields, an additional validation rule must be used to ensure it is
+used correctly.
+
+For example, the following document will only pass validation if `field` is
+defined as a List type in the associated schema:
+
+```graphql counter-example
+query {
+  field @stream(initialCount: 0)
+}
+```
+
 ## Variables
 
 ### Variable Uniqueness
diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 18e5237f5..85661e499 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -31,6 +31,10 @@ request is determined by the result of executing this operation according to the
 
 ExecuteRequest(schema, document, operationName, variableValues, initialValue):
 
+Note: the execution algorithms below assume that the implementing language
+supports coroutines or a comparable asynchronous execution primitive, so that
+future executions may be initiated and awaited without blocking.
+
 - Let {operation} be the result of {GetOperation(document, operationName)}.
 - Let {coercedVariableValues} be the result of {CoerceVariableValues(schema,
   operation, variableValues)}.
@@ -298,7 +302,9 @@ ExecuteSubscriptionEvent(subscription, schema, variableValues, initialValue):
   subscriptionType, selectionSet)}.
 
 Note: The {ExecuteSubscriptionEvent()} algorithm is intentionally similar to
-{ExecuteQuery()} since this is how each event result is produced.
+{ExecuteQuery()} since this is how each event result is produced. Incremental
+delivery, however, is not supported within {ExecuteSubscriptionEvent()} and will
+result in a _field error_.
 
 #### Unsubscribe
 
@@ -324,18 +330,33 @@ the underlying Source Stream).
 
 First, the selection set is turned into a grouped field set; then, we execute
 this grouped field set and return the resulting {data} and {errors}.
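As a rough illustration only (not spec text; all names here are invented for the sketch), the initial result described above is simply a map of `data` and `errors`, with `pending` and `hasNext` entries added only when stream results remain outstanding:

```python
# Illustrative sketch of the initial-response shape described above.
# `data`, `errors` and the pending stream descriptors would come from
# executing the grouped field set; here they are plain inputs.

def build_initial_response(data, errors, pending=None):
    response = {"data": data, "errors": errors}
    if pending:
        # Incremental delivery: announce pending streams and signal that
        # further payloads will follow.
        response["pending"] = pending
        response["hasNext"] = True
    return response

single = build_initial_response({"name": "Luke"}, [])
streamed = build_initial_response(
    {"films": ["A New Hope"]}, [], pending=[{"path": ["films"], "id": "0"}]
)
```

A response with no pending streams carries only `data` and `errors`, matching the non-incremental case.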
+If an operation contains `@stream` directives, execution may also result in a
+Subsequent Result stream in addition to the initial response. The procedure for
+yielding subsequent results is specified by the {YieldSubsequentResults()}
+algorithm.
+
 ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet,
 serial):
 
 - If {serial} is not provided, initialize it to {false}.
 - Let {groupedFieldSet} be the result of {CollectFields(objectType,
   selectionSet, variableValues)}.
-- Let {data} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet,
-  objectType, initialValue, variableValues)} _serially_ if {serial} is {true},
-  _normally_ (allowing parallelization) otherwise.
+- Let {data} and {incrementalDigests} be the result of
+  {ExecuteGroupedFieldSet(groupedFieldSet, objectType, initialValue,
+  variableValues)} _serially_ if {serial} is {true}, _normally_ (allowing
+  parallelization) otherwise.
 - Let {errors} be the list of all _field error_ raised while executing the
-  selection set.
-- Return an unordered map containing {data} and {errors}.
+  {groupedFieldSet}.
+- Let {newPendingResults} and {futures} be the results of
+  {ProcessIncrementalDigests(incrementalDigests)}.
+- Let {ids} and {initialPayload} be the result of
+  {GetIncrementalPayload(newPendingResults)}.
+- If {ids} is empty, return an unordered map containing {data} and {errors}.
+- Set the corresponding entries on {initialPayload} to {data} and {errors}.
+- Let {subsequentResults} be the result of {YieldSubsequentResults(ids,
+  futures)}.
+- Return {initialPayload} and {subsequentResults}.
 
 ### Field Collection
 
@@ -443,6 +464,154 @@ DoesFragmentTypeApply(objectType, fragmentType):
 
 Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
 directives may be applied in either order since they apply commutatively.
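The commutativity of `@skip` and `@include` noted above can be sketched in a few lines (an illustrative sketch only, not spec text; the function names are invented): a selection is collected only when `@skip` is absent-or-false and `@include` is absent-or-true, so each directive contributes an independent boolean condition and the evaluation order cannot matter.

```python
# Illustrative sketch (not spec text): evaluating @skip and @include
# commutes because each contributes an independent boolean condition.

def should_include(skip_if=None, include_if=None):
    """Return True if a selection should be collected.

    skip_if / include_if are the coerced Boolean arguments of the
    @skip / @include directives, or None when the directive is absent.
    """
    if skip_if is True:        # @skip(if: true) always excludes
        return False
    if include_if is False:    # @include(if: false) always excludes
        return False
    return True

def should_include_reversed(skip_if=None, include_if=None):
    # The same decision with the two directive checks applied in the
    # opposite order.
    if include_if is False:
        return False
    if skip_if is True:
        return False
    return True

# The two orderings agree for every combination of arguments.
cases = [None, True, False]
assert all(
    should_include(s, i) == should_include_reversed(s, i)
    for s in cases for i in cases
)
```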
+### Processing Incremental Digests
+
+An Incremental Digest is a structure containing:
+
+- {newPendingResults}: a list of new pending results to publish.
+- {futures}: a list of future executions whose results will complete pending
+  results. The results of these future executions may immediately complete the
+  pending results, or may incrementally complete the pending results, and
+  contain additional Incremental Digests that will immediately or eventually
+  complete those results.
+
+ProcessIncrementalDigests(incrementalDigests):
+
+- Let {newPendingResults} and {futures} be lists containing all of the items
+  from the corresponding lists within each item of {incrementalDigests}.
+- Return {newPendingResults} and {futures}.
+
+### Yielding Subsequent Results
+
+The procedure for yielding subsequent results is specified by the
+{YieldSubsequentResults()} algorithm. First, any new future executions that have
+not yet been initiated are initiated. Then, any completed future executions are
+processed to determine the payload to be yielded. Finally, if any pending
+results remain, the procedure is repeated recursively.
+
+YieldSubsequentResults(originalIds, newFutures, initiatedFutures):
+
+- Initialize {futures} to a list containing all items in {initiatedFutures}.
+- For each {future} in {newFutures}:
+  - If {future} has not been initiated, initiate it.
+  - Append {future} to {futures}.
+- Wait for any future execution contained in {futures} to complete.
+- Let {updates}, {newPendingResults}, {newestFutures}, and {remainingFutures} be
+  the result of {ProcessCompletedFutures(futures)}.
+- Let {ids} and {payload} be the result of
+  {GetIncrementalPayload(newPendingResults, originalIds, updates)}.
+- Yield {payload}.
+- If {hasNext} on {payload} is {false}:
+  - Complete this subsequent result stream and return.
+- Yield the results of {YieldSubsequentResults(ids, newestFutures,
+  remainingFutures)}.
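A simplified, synchronous sketch of the yield loop above may help make the control flow concrete. This is illustrative only, not spec text: a "future" is modeled as a zero-argument callable that returns either items plus a continuation or a completion marker, and {ProcessCompletedFutures()} and {GetIncrementalPayload()} are inlined and heavily simplified.

```python
# Simplified, synchronous sketch of the YieldSubsequentResults() loop.
# Real implementations would await asynchronous future executions; here a
# "future" is a plain callable, which keeps the control flow visible.

def yield_subsequent_results(ids, new_futures):
    """Yield subsequent payloads until no pending results remain."""
    futures = list(new_futures)
    while futures:
        results = [future() for future in futures]  # "wait" for completion
        futures = []
        incremental, completed = [], []
        for result in results:
            if result.get("done"):
                # The stream ended: report completion and retire its id.
                completed.append({"id": ids.pop(result["stream"])})
            else:
                # The stream produced items and a continuation future.
                incremental.append(
                    {"id": ids[result["stream"]], "items": result["items"]}
                )
                futures.append(result["next"])
        payload = {"hasNext": bool(ids)}
        if incremental:
            payload["incremental"] = incremental
        if completed:
            payload["completed"] = completed
        yield payload
        if not payload["hasNext"]:
            return

# One pending stream (id 0) that yields a single item and then closes.
def _first():
    return {"stream": "films", "items": ["Return of the Jedi"], "next": _second}

def _second():
    return {"stream": "films", "done": True}

payloads = list(yield_subsequent_results({"films": 0}, [_first]))
```

The first yielded payload carries the incremental items with `hasNext: true`; the second reports completion with `hasNext: false`, ending the stream.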
+
+GetIncrementalPayload(newPendingResults, originalIds, updates):
+
+- Let {ids} be a new unordered map containing all of the entries in
+  {originalIds}.
+- Initialize {pending}, {incremental}, and {completed} to empty lists.
+- For each {newPendingResult} in {newPendingResults}:
+  - Let {path} and {label} be the corresponding entries on {newPendingResult}.
+  - Let {id} be a unique identifier for this {newPendingResult}.
+  - Set the entry for {newPendingResult} in {ids} to {id}.
+  - Let {pendingEntry} be an unordered map containing {path}, {label}, and {id}.
+  - Append {pendingEntry} to {pending}.
+- For each {update} of {updates}:
+  - Let {completedResults}, {errors}, and {incrementalResults} be the
+    corresponding {completed}, {errors}, and {incremental} entries on {update}.
+  - For each {completedResult} in {completedResults}:
+    - Let {id} be the entry for {completedResult} on {ids}.
+    - If {id} is not defined, continue to the next {completedResult} in
+      {completedResults}.
+    - Remove the entry on {ids} for {completedResult}.
+    - Let {completedEntry} be an unordered map containing {id}.
+    - If {errors} is defined, set the corresponding entry on {completedEntry} to
+      {errors}.
+    - Append {completedEntry} to {completed}.
+  - For each {incrementalResult} in {incrementalResults}:
+    - Let {stream} be the corresponding entry on {incrementalResult}.
+    - Let {id} be the corresponding entry on {ids} for {stream}.
+    - If {id} is not defined, continue to the next {incrementalResult} in
+      {incrementalResults}.
+    - Let {items} and {errors} be the corresponding entries on
+      {incrementalResult}.
+    - Let {incrementalEntry} be an unordered map containing {id}, {items}, and
+      {errors}.
+    - Append {incrementalEntry} to {incremental}.
+- Let {hasNext} be {false} if {ids} is empty, otherwise {true}.
+- Let {payload} be an unordered map containing {hasNext}.
+- If {pending} is not empty:
+  - Set the corresponding entry on {payload} to {pending}.
+- If {incremental} is not empty:
+  - Set the corresponding entry on {payload} to {incremental}.
+- If {completed} is not empty:
+  - Set the corresponding entry on {payload} to {completed}.
+- Return {ids} and {payload}.
+
+### Processing Completed Futures
+
+As future executions are completed, the {ProcessCompletedFutures()} algorithm
+describes how the results of these executions impact the incremental state.
+Results from completed futures are processed individually, with each result
+possibly:
+
+- Completing existing pending results.
+- Contributing data for the next payload.
+- Containing additional Incremental Digests.
+
+When encountering additional Incremental Digests, {ProcessCompletedFutures()}
+calls itself recursively, processing the new Incremental Digests and checking
+for any completed futures, as long as the new Incremental Digests do not contain
+any new pending results. If they do, first a new payload is yielded, notifying
+the client that new pending results have been encountered.
+
+ProcessCompletedFutures(maybeCompletedFutures, updates, pending,
+incrementalDigests, remainingFutures):
+
+- If {updates}, {pending}, {incrementalDigests}, or {remainingFutures} are not
+  provided, initialize them to empty lists.
+- Let {completedFutures} be a list containing all completed futures from
+  {maybeCompletedFutures}; append the remaining futures to {remainingFutures}.
+- Initialize {supplementalIncrementalDigests} to an empty list.
+- For each {completedFuture} in {completedFutures}:
+  - Let {result} be the result of {completedFuture}.
+  - Let {resultUpdates} and {resultIncrementalDigests} be the result of calling
+    {GetUpdatesForStreamItems(result)}.
+  - Append all items in {resultUpdates} to {updates}.
+  - For each {resultIncrementalDigest} in {resultIncrementalDigests}:
+    - If {resultIncrementalDigest} contains a {newPendingResults} entry:
+      - Append {resultIncrementalDigest} to {incrementalDigests}.
+    - Otherwise:
+      - Append {resultIncrementalDigest} to {supplementalIncrementalDigests}.
+- If {supplementalIncrementalDigests} is empty:
+  - Let {newPendingResults} and {futures} be the result of
+    {ProcessIncrementalDigests(incrementalDigests)}.
+  - Append all items in {newPendingResults} to {pending}.
+  - Return {updates}, {pending}, {futures}, and {remainingFutures}.
+- Let {newPendingResults} and {futures} be the result of
+  {ProcessIncrementalDigests(supplementalIncrementalDigests)}.
+- Append all items in {newPendingResults} to {pending}.
+- Return the result of {ProcessCompletedFutures(futures, updates, pending,
+  incrementalDigests, remainingFutures)}.
+
+GetUpdatesForStreamItems(streamItems):
+
+- Let {stream}, {items}, {errors}, and {incrementalDigests} be the corresponding
+  entries on {streamItems}.
+- If {items} is not defined, the stream has asynchronously ended:
+  - Let {completed} be a list containing {stream}.
+  - Let {update} be an unordered map containing {completed}.
+- Otherwise, if {items} is {null}:
+  - Let {completed} be a list containing {stream}.
+  - Let {update} be an unordered map containing {completed} and {errors}.
+- Otherwise:
+  - Let {incremental} be a list containing {streamItems}.
+  - Let {update} be an unordered map containing {incremental}.
+- Let {updates} be a list containing {update}.
+- Return {updates} and {incrementalDigests}.
+
 ## Executing a Grouped Field Set
 
 To execute a grouped field set, the object value being evaluated and the object
@@ -452,20 +621,25 @@ be executed in parallel.
 
 Each represented field in the grouped field set produces an entry into a
 response map.
 
-ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
-variableValues):
+ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues,
+path):
 
+- If {path} is not provided, initialize it to an empty list.
 - Initialize {resultMap} to an empty ordered map.
+- Initialize {incrementalDigests} to an empty list.
 - For each {groupedFieldSet} as {responseKey} and {fields}:
   - Let {fieldName} be the name of the first entry in {fields}. Note: This value
     is unaffected if an alias is used.
   - Let {fieldType} be the return type defined for the field {fieldName} of
     {objectType}.
   - If {fieldType} is defined:
-    - Let {responseValue} be {ExecuteField(objectType, objectValue, fieldType,
-      fields, variableValues)}.
+    - Let {responseValue} and {fieldIncrementalDigests} be the result of
+      {ExecuteField(objectType, objectValue, fieldType, fields, variableValues,
+      path)}.
     - Set {responseValue} as the value for {responseKey} in {resultMap}.
-- Return {resultMap}.
+    - Append all Incremental Digests in {fieldIncrementalDigests} to
+      {incrementalDigests}.
+- Return {resultMap} and {incrementalDigests}.
 
 Note: {resultMap} is ordered by which fields appear first in the operation. This
 is explained in greater detail in the Field Collection section above.
@@ -479,6 +653,31 @@ either resolving to {null} if allowed or further propagated to a parent field.
 If this occurs, any sibling fields which have not yet executed or have not yet
 yielded a value may be cancelled to avoid unnecessary work.
 
+Additionally, Subsequent Result records must not be yielded if their path points
+to a location that has resolved to {null} due to propagation of a field error.
+If these subsequent results have not yet executed or have not yet yielded a
+value they may also be cancelled to avoid unnecessary work.
+
+For example, assume the field `alwaysThrows` is of a `Non-Null` type and always
+raises a field error:
+
+```graphql example
+{
+  myObject @stream(initialCount: 1) {
+    alwaysThrows
+  }
+}
+```
+
+In this case, only one response should be sent. Subsequent items from the stream
+should be ignored and their completion, if initiated, may be cancelled.
+
+```json example
+{
+  "data": { "myObject": null }
+}
+```
+
 Note: See [Handling Field Errors](#sec-Handling-Field-Errors) for more about
 this behavior.
 
@@ -578,6 +777,10 @@ A correct executor must generate the following result for that selection set:
 }
 ```
 
+When subsections contain a `@stream` directive, these subsections are no longer
+required to execute serially. The streamed portions of the subsection may be
+executed in parallel, as defined in {ExecuteStreamField()}.
+
 ## Executing Fields
 
 Each field requested in the grouped field set that is defined on the selected
@@ -586,16 +789,17 @@ coerces any provided argument values, then resolves a value for the field, and
 finally completes that value either by recursively executing another selection
 set or coercing a scalar value.
 
-ExecuteField(objectType, objectValue, fieldType, fields, variableValues):
+ExecuteField(objectType, objectValue, fieldType, fields, variableValues, path):
 
 - Let {field} be the first entry in {fields}.
 - Let {fieldName} be the field name of {field}.
+- Let {path} be a new list containing the items of {path} with {fieldName}
+  appended.
 - Let {argumentValues} be the result of {CoerceArgumentValues(objectType, field,
   variableValues)}
 - Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
   argumentValues)}.
 - Return the result of {CompleteValue(fieldType, fields, resolvedValue,
-  variableValues)}.
+  variableValues, path)}.
 
 ### Coercing Field Arguments
 
@@ -682,29 +886,74 @@ an underlying database or networked service to produce a value. This
 necessitates the rest of a GraphQL executor to handle an asynchronous execution
 flow. In addition, an implementation for collections may leverage asynchronous
 iterators or asynchronous generators provided by many programming languages.
+This may be particularly helpful when used in conjunction with the `@stream`
+directive.
 
 ### Value Completion
 
 After resolving the value for a field, it is completed by ensuring it adheres to
 the expected return type.
If the return type is another Object type, then the
-field execution process continues recursively.
+field execution process continues recursively. If the return type is a List
+type, each member of the resolved collection is completed using the same value
+completion process. In the case where `@stream` is specified on a field of list
+type, value completion iterates over the collection until the number of items
+yielded satisfies `initialCount` specified on the `@stream` directive.
 
-CompleteValue(fieldType, fields, result, variableValues):
+CompleteValue(fieldType, fields, result, variableValues, path):
 
 - If the {fieldType} is a Non-Null type:
   - Let {innerType} be the inner type of {fieldType}.
-  - Let {completedResult} be the result of calling {CompleteValue(innerType,
-    fields, result, variableValues)}.
+  - Let {completedResult} and {incrementalDigests} be the result of calling
+    {CompleteValue(innerType, fields, result, variableValues, path)}.
   - If {completedResult} is {null}, raise a _field error_.
-  - Return {completedResult}.
+  - Return {completedResult} and {incrementalDigests}.
 - If {result} is {null} (or another internal value similar to {null} such as
   {undefined}), return {null}.
 - If {fieldType} is a List type:
+  - Initialize {incrementalDigests} to an empty list.
   - If {result} is not a collection of values, raise a _field error_.
+  - Let {field} be the first entry in {fields}.
   - Let {innerType} be the inner type of {fieldType}.
-  - Return a list where each list item is the result of calling
-    {CompleteValue(innerType, fields, resultItem, variableValues)}, where
-    {resultItem} is each item in {result}.
+  - If {field} provides the directive `@stream` and its {if} argument is not
+    {false} and is not a variable in {variableValues} with the value {false} and
+    {innerType} is the outermost return type of the list type defined for
+    {field}:
+    - Let {streamDirective} be that directive.
+    - If this execution is for a subscription operation, raise a _field error_.
+    - Let {initialCount} be the value or variable provided to
+      {streamDirective}'s {initialCount} argument.
+    - If {initialCount} is less than zero, raise a _field error_.
+    - Let {label} be the value or variable provided to {streamDirective}'s
+      {label} argument.
+  - Let {iterator} be an iterator for {result}.
+  - Let {items} be an empty list.
+  - Let {index} be zero.
+  - While {result} is not closed:
+    - If {streamDirective} is defined and {index} is greater than or equal to
+      {initialCount}:
+      - Let {stream} be an unordered map containing {path} and {label}.
+      - Let {future} represent the future execution of
+        {ExecuteStreamField(stream, iterator, fields, index, innerType,
+        variableValues)}.
+      - If early execution of streamed fields is desired:
+        - Following any implementation specific deferral of further execution,
+          initiate {future}.
+      - Let {incrementalDigest} be a new Incremental Digest created from
+        {stream} and {future}.
+      - Append {incrementalDigest} to {incrementalDigests}.
+      - Return {items} and {incrementalDigests}.
+    - Otherwise:
+      - Wait for the next item from {result} via the {iterator}.
+      - If an item is not retrieved because of an error, raise a _field error_.
+      - Let {item} be the item retrieved from {result}.
+      - Let {itemPath} be {path} with {index} appended.
+      - Let {completedItem} and {itemIncrementalDigests} be the result of
+        calling {CompleteValue(innerType, fields, item, variableValues,
+        itemPath)}.
+      - Append {completedItem} to {items}.
+      - Append all Incremental Digests in {itemIncrementalDigests} to
+        {incrementalDigests}.
+      - Increment {index}.
+  - Return {items} and {incrementalDigests}.
 - If {fieldType} is a Scalar or Enum type:
   - Return the result of {CoerceResult(fieldType, result)}.
 - If {fieldType} is an Object, Interface, or Union type:
@@ -718,6 +967,35 @@ CompleteValue(fieldType, fields, result, variableValues):
   objectType, result, variableValues)} _normally_ (allowing for
   parallelization).
 
+#### Execute Stream Field
+
+ExecuteStreamField(stream, iterator, fields, index, innerType, variableValues):
+
+- Let {path} be the corresponding entry on {stream}.
+- Let {itemPath} be {path} with {index} appended.
+- Wait for the next item from {iterator}.
+- If {iterator} is closed, return.
+- Let {item} be the next item retrieved via {iterator}.
+- Let {nextIndex} be {index} plus one.
+- Let {completedItem} and {itemIncrementalDigests} be the result of
+  {CompleteValue(innerType, fields, item, variableValues, itemPath)}.
+- Initialize {items} to an empty list.
+- Append {completedItem} to {items}.
+- Let {errors} be the list of all _field error_ raised while completing the
+  item.
+- Let {future} represent the future execution of {ExecuteStreamField(stream,
+  iterator, fields, nextIndex, innerType, variableValues)}.
+- If early execution of streamed fields is desired:
+  - Following any implementation specific deferral of further execution,
+    initiate {future}.
+- Let {incrementalDigest} be a new Incremental Digest created from {future}.
+- Initialize {incrementalDigests} to a list containing {incrementalDigest}.
+- Append all Incremental Digests in {itemIncrementalDigests} to
+  {incrementalDigests}.
+- Let {streamedItems} be an unordered map containing {stream}, {items}, {errors},
+  and {incrementalDigests}.
+- Return {streamedItems}.
+
 **Coercing Results**
 
 The primary purpose of value completion is to ensure that the values returned by
@@ -829,6 +1107,86 @@ resolves to {null}, then the entire list must resolve to {null}. If the `List`
 type is also wrapped in a `Non-Null`, the field error continues to propagate
 upwards.
 
+When a field error is raised inside `ExecuteStreamField`, the stream payloads
+act as error boundaries. 
That is, the null resulting from a `Non-Null` type
+cannot propagate outside of the boundary of the stream payload.
+
+If the `stream` directive is present on a list field with a Non-Nullable inner
+type, and a field error has caused a {null} to propagate to the list item, the
+{null} similarly should not be sent to the client, as this will overwrite
+existing data. In this case, the associated Stream's `completed` entry must
+include the causative errors, whose presence indicates the failure of the stream
+to complete successfully. For example, assume the `films` field is a `List` type
+with a `Non-Null` inner type. In this case, the second list item raises a field
+error:
+
+```graphql example
+{
+  films @stream(initialCount: 1)
+}
+```
+
+Response 1, the initial response is sent:
+
+```json example
+{
+  "data": { "films": ["A New Hope"] },
+  "pending": [{ "path": ["films"] }],
+  "hasNext": true
+}
+```
+
+Response 2, the stream is completed with errors. Incremental data cannot be
+sent, as this would overwrite previously sent values.
+
+```json example
+{
+  "completed": [
+    {
+      "path": ["films"],
+      "errors": [...]
+    }
+  ],
+  "hasNext": false
+}
+```
+
+In this alternative example, assume the `films` field is a `List` type without a
+`Non-Null` inner type. In this case, the second list item also raises a field
+error:
+
+```graphql example
+{
+  films @stream(initialCount: 1)
+}
+```
+
+Response 1, the initial response is sent:
+
+```json example
+{
+  "data": { "films": ["A New Hope"] },
+  "hasNext": true
+}
+```
+
+Response 2, the first stream payload is sent; the stream is not completed. The
+{items} entry has been set to a list containing {null}, as this {null} has only
+propagated as high as the list item. 
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["films", 1],
+      "items": [null],
+      "errors": [...]
+    }
+  ],
+  "hasNext": true
+}
+```
+
 If all fields from the root of the request to the source of the field error
 return `Non-Null` types, then the {"data"} entry in the response should be
 {null}.
diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md
index 8dcd9234c..112c7f6ff 100644
--- a/spec/Section 7 -- Response.md
+++ b/spec/Section 7 -- Response.md
@@ -10,7 +10,7 @@ the case that any _field error_ was raised on a field and was replaced with
 
 ## Response Format
 
-A response to a GraphQL request must be a map.
+A response to a GraphQL request must be a map or a response stream of maps.
 
 If the request raised any errors, the response map must contain an entry with
 key `errors`. The value of this entry is described in the "Errors" section. If
@@ -22,14 +22,39 @@ key `data`. The value of this entry is described in the "Data" section. If the
 request failed before execution, due to a syntax error, missing information, or
 validation error, this entry must not be present.
 
+When the response of the GraphQL operation is a response stream, the first value
+will be the initial response. All subsequent values may contain an `incremental`
+entry, containing a list of Stream payloads.
+
+The `label` and `path` entries on Stream payloads are used by clients to
+identify which `@stream` directive in the GraphQL operation produced the data
+included in an `incremental` entry on a value returned by the response stream.
+When a label is provided, the combination of these two entries will be unique
+across all Stream payloads returned in the response stream.
+
+If the response of the GraphQL operation is a response stream, each response map
+must contain an entry with key `hasNext`. The value of this entry is `true` for
+all but the last response in the stream. The value of this entry is `false` for
+the last response of the stream. 
This entry must not be present for GraphQL
+operations that return a single response map.
+
+The GraphQL service may determine there are no more values in the response
+stream after a previous value with `hasNext` equal to `true` has been emitted.
+In this case, the last value in the response stream should be a map without
+`data` and `incremental` entries, and a `hasNext` entry with a value of `false`.
+
 The response map may also contain an entry with key `extensions`. This entry,
 if set, must have a map as its value. This entry is reserved for implementors
 to extend the protocol however they see fit, and hence there are no additional
-restrictions on its contents.
+restrictions on its contents. When the response of the GraphQL operation is a
+response stream, implementors may send subsequent response maps containing only
+`hasNext` and `extensions` entries. Stream payloads may also contain an entry
+with the key `extensions`, also reserved for implementors to extend the protocol
+however they see fit.
 
 To ensure future changes to the protocol do not break existing services and
 clients, the top level response map must not contain any entries other than the
-three described above.
+five described above.
 
 Note: When `errors` is present in the response, it may be helpful for it to
 appear first when serialized to make it more clear when errors are present in a
@@ -107,14 +132,8 @@ syntax element.
 
 If an error can be associated to a particular field in the GraphQL result, it
 must contain an entry with the key `path` that details the path of the response
 field which experienced the error. This allows clients to identify whether a
-`null` result is intentional or caused by a runtime error.
-
-This field should be a list of path segments starting at the root of the
-response and ending with the field associated with the error. Path segments that
-represent fields should be strings, and path segments that represent list
-indices should be 0-indexed integers. 
If the error happens in an aliased field,
-the path to the error should use the aliased name, since it represents a path in
-the response, not in the request.
+`null` result is intentional or caused by a runtime error. The value of this
+field is described in the [Path](#sec-Path) section.
 
 For example, if fetching one of the friends' names fails in the following
 operation:
@@ -244,6 +263,136 @@ discouraged.
 }
 ```
 
+### Incremental Delivery
+
+The `pending` entry in the response is a non-empty list of references to pending
+Stream results. If the response of the GraphQL operation is a response stream,
+this field should appear on the initial and possibly subsequent payloads.
+
+The `incremental` entry in the response is a non-empty list of data fulfilling
+Stream results. If the response of the GraphQL operation is a response stream,
+this field may appear on the subsequent payloads.
+
+The `completed` entry in the response is a non-empty list of references to
+completed Stream results.
+
+For example:
+
+```graphql example
+query {
+  person(id: "cGVvcGxlOjE=") {
+    name
+    films @stream(initialCount: 1, label: "filmStream") {
+      title
+    }
+  }
+}
+```
+
+The response stream might look like:
+
+Response 1, the initial response does not contain any streamed results.
+
+```json example
+{
+  "data": {
+    "person": {
+      "name": "Luke Skywalker",
+      "films": [{ "title": "A New Hope" }]
+    }
+  },
+  "pending": [{ "path": ["person", "films"], "label": "filmStream" }],
+  "hasNext": true
+}
+```
+
+Response 2 contains the first stream payload.
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["person", "films"],
+      "items": [{ "title": "The Empire Strikes Back" }]
+    }
+  ],
+  "hasNext": true
+}
+```
+
+Response 3 contains the final stream payload. In this example, the underlying
+iterator does not close synchronously so {hasNext} is set to {true}. If this
+iterator did close synchronously, {hasNext} would be set to {false} and this
+would be the final response. 
+
+```json example
+{
+  "incremental": [
+    {
+      "path": ["person", "films"],
+      "items": [{ "title": "Return of the Jedi" }]
+    }
+  ],
+  "hasNext": true
+}
+```
+
+Response 4 contains no incremental payloads. {hasNext} set to {false} indicates
+the end of the response stream. This response is sent when the underlying
+iterator of the `films` field closes.
+
+```json example
+{
+  "completed": [{ "path": ["person", "films"], "label": "filmStream" }],
+  "hasNext": false
+}
+```
+
+#### Streamed Data
+
+Streamed data may appear as an item in the `incremental` entry of a response.
+Streamed data is the result of an associated `@stream` directive in the
+operation. A stream payload must contain `items` and `path` entries and may
+contain `errors` and `extensions` entries.
+
+##### Items
+
+The `items` entry in a stream payload is a list of results from the execution of
+the associated `@stream` directive. This output will be a list of the same type
+as the field with the associated `@stream` directive. If an error has caused a
+`null` to bubble up to a field higher than the list field with the associated
+`@stream` directive, then the stream will complete with errors.
+
+#### Path
+
+A `path` field allows for the association to a particular field in a GraphQL
+result. This field should be a list of path segments starting at the root of the
+response and ending with the field to be associated with. Path segments that
+represent fields should be strings, and path segments that represent list
+indices should be 0-indexed integers. If the path is associated to an aliased
+field, the path should use the aliased name, since it represents a path in the
+response, not in the request.
+
+When the `path` field is present on a Stream payload, it indicates that the
+`items` field represents the partial result of the list field containing the
+corresponding `@stream` directive. 
The non-final path segments must
+refer to the location of the list field containing the corresponding `@stream`
+directive. The final segment of the path list must be a 0-indexed integer. This
+integer indicates that this result fills a range of the list, where the
+beginning of the range is at the index given by this integer, and the length of
+the range is the length of the data.
+
+When the `path` field is present on an "Error result", it indicates the response
+field which experienced the error.
+
+#### Label
+
+A Stream payload may contain a string field `label`. This `label` is the same
+label passed to the `@stream` directive associated with the response. This
+allows clients to identify which `@stream` directive is associated with this
+value. `label` will not be present if the corresponding `@stream` directive is
+not passed a `label` argument.
+
 ## Serialization Format
 
 GraphQL does not require a specific serialization format. However, clients
@@ -303,10 +452,10 @@ enables more efficient parsing of responses if the order of properties can be
 anticipated.
 
 Serialization formats which represent an ordered map should preserve the order
-of requested fields as defined by {CollectFields()} in the Execution section.
-Serialization formats which only represent unordered maps but where order is
-still implicit in the serialization's textual order (such as JSON) should
-preserve the order of requested fields textually.
+of requested fields as defined by {AnalyzeSelectionSet()} in the Execution
+section. Serialization formats which only represent unordered maps but where
+order is still implicit in the serialization's textual order (such as JSON)
+should preserve the order of requested fields textually.
For example, if the request was `{ name, age }`, a GraphQL service responding in JSON should respond with `{ "name": "Mark", "age": 30 }` and should not respond From 3677a0969ade211b4f81b773a01008e4966c7b3d Mon Sep 17 00:00:00 2001 From: Rob Richard Date: Wed, 6 Dec 2023 17:43:55 +0200 Subject: [PATCH 07/46] Introduce @defer. Authored-by: Rob Richard Co-authored-by: Benjie Gillam Co-authored-by: Yaacov Rydzinski --- spec/Section 3 -- Type System.md | 83 +++- spec/Section 5 -- Validation.md | 80 ++-- spec/Section 6 -- Execution.md | 687 ++++++++++++++++++++++++++----- spec/Section 7 -- Response.md | 91 ++-- 4 files changed, 769 insertions(+), 172 deletions(-) diff --git a/spec/Section 3 -- Type System.md b/spec/Section 3 -- Type System.md index 6f121ba44..83d7dfcbd 100644 --- a/spec/Section 3 -- Type System.md +++ b/spec/Section 3 -- Type System.md @@ -794,8 +794,9 @@ And will yield the subset of each object type queried: When querying an Object, the resulting mapping of fields are conceptually ordered in the same order in which they were encountered during execution, excluding fragments for which the type does not apply and fields or fragments -that are skipped via `@skip` or `@include` directives. This ordering is -correctly produced when using the {CollectFields()} algorithm. +that are skipped via `@skip` or `@include` directives or temporarily skipped via +`@defer`. This ordering is correctly produced when using the {CollectFields()} +algorithm. Response serialization formats capable of representing ordered maps should maintain this ordering. Serialization formats which can only represent unordered @@ -1941,10 +1942,10 @@ by a validator, executor, or client tool such as a code generator. GraphQL implementations should provide the `@skip` and `@include` directives. -GraphQL implementations are not required to implement the `@stream` directive. -If the directive is implemented, it must be implemented according to this -specification. 
GraphQL implementations that do not support the `@stream`
-directive must not make it available via introspection.
+GraphQL implementations are not required to implement the `@defer` and `@stream`
+directives. If either or both of these directives are implemented, they must be
+implemented according to this specification. GraphQL implementations that do not
+support these directives must not make them available via introspection.
 
 GraphQL implementations that support the type system definition language must
 provide the `@deprecated` directive if representing deprecated portions of the
@@ -2167,6 +2168,50 @@ to the relevant IETF specification.
 scalar UUID @specifiedBy(url: "https://tools.ietf.org/html/rfc4122")
 ```
 
+### @defer
+
+```graphql
+directive @defer(
+  label: String
+  if: Boolean! = true
+) on FRAGMENT_SPREAD | INLINE_FRAGMENT
+```
+
+The `@defer` directive may be provided for fragment spreads and inline fragments
+to inform the executor to delay the execution of the current fragment,
+indicating that it is deprioritized. A query with a `@defer` directive will
+cause the request to potentially return multiple responses, where non-deferred
+data is delivered in the initial response and deferred data is delivered in a
+subsequent response. `@include` and `@skip` take precedence over `@defer`.
+
+```graphql example
+query myQuery($shouldDefer: Boolean) {
+  user {
+    name
+    ...someFragment @defer(label: "someLabel", if: $shouldDefer)
+  }
+}
+fragment someFragment on User {
+  id
+  profile_picture {
+    uri
+  }
+}
```
+
+#### @defer Arguments
+
+- `if: Boolean! = true` - When `true`, fragment _should_ be deferred (See
+  [related note](#note-088b7)). When `false`, fragment will not be deferred and
+  data will be included in the initial response. Defaults to `true` when
+  omitted.
+- `label: String` - May be used by GraphQL clients to identify the data from
+  responses and associate it with the corresponding defer directive. 
If
+  provided, the GraphQL service must add it to the corresponding payload.
+  `label` must be a unique label across all `@defer` and `@stream` directives
+  in a document. `label` must not be provided as a variable.
+
 ### @stream
 
 ```graphql
@@ -2201,20 +2246,20 @@ query myQuery($shouldStream: Boolean) {
 
 - `label: String` - May be used by GraphQL clients to identify the data from
   responses and associate it with the corresponding stream directive. If
   provided, the GraphQL service must add it to the corresponding payload.
-  `label` must be unique label across all `@stream` directives in a document.
-  `label` must not be provided as a variable.
+  `label` must be a unique label across all `@defer` and `@stream` directives
+  in a document. `label` must not be provided as a variable.
 - `initialCount: Int` - The number of list items the service should return as
   part of the initial response. If omitted, defaults to `0`. A field error will
   be raised if the value of this argument is less than `0`.
 
-Note: The ability to stream parts of a response can have a potentially
-significant impact on application performance. Developers generally need clear,
-predictable control over their application's performance. It is highly
-recommended that GraphQL services honor the `@stream` directives on each
-execution. However, the specification allows advanced use cases where the
-service can determine that it is more performant to not stream. Therefore,
-GraphQL clients _must_ be able to process a response that ignores the `@stream`
-directive. This also applies to the `initialCount` argument on the `@stream`
-directive. Clients _must_ be able to process a streamed response that contains a
-different number of initial list items than what was specified in the
-`initialCount` argument.
+Note: The ability to defer and/or stream parts of a response can have a
+potentially significant impact on application performance. 
Developers generally +need clear, predictable control over their application's performance. It is +highly recommended that GraphQL services honor the `@defer` and `@stream` +directives on each execution. However, the specification allows advanced use +cases where the service can determine that it is more performant to not defer +and/or stream. Therefore, GraphQL clients _must_ be able to process a response +that ignores the `@defer` and/or `@stream` directives. This also applies to the +`initialCount` argument on the `@stream` directive. Clients _must_ be able to +process a streamed response that contains a different number of initial list +items than what was specified in the `initialCount` argument. diff --git a/spec/Section 5 -- Validation.md b/spec/Section 5 -- Validation.md index fa5cffe3a..c99824c58 100644 --- a/spec/Section 5 -- Validation.md +++ b/spec/Section 5 -- Validation.md @@ -1528,7 +1528,7 @@ query ($foo: Boolean = true, $bar: Boolean = false) { } ``` -### Stream Directives Are Used On Valid Root Field +### Defer And Stream Directives Are Used On Valid Root Field **Formal Specification** @@ -1536,25 +1536,27 @@ query ($foo: Boolean = true, $bar: Boolean = false) { - Let {directiveName} be the name of {directive}. - Let {mutationType} be the root Mutation type in {schema}. - Let {subscriptionType} be the root Subscription type in {schema}. -- If {directiveName} is "stream": +- If {directiveName} is "defer" or "stream": - The parent type of {directive} must not be {mutationType} or {subscriptionType}. **Explanatory Text** -The stream directives is not allowed to be used on root fields of the mutation -or subscription type. +The defer and stream directives are not allowed to be used on root fields of the +mutation or subscription type. 
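This validation rule can be sketched as a standalone check over a simplified model of collected directive uses; the record shape below is an assumption for illustration, not the reference implementation's AST:

```typescript
// Simplified model: one entry per directive use, tagged with the name of the
// root operation type that contains its parent field.
interface DirectiveUse {
  name: string;
  parentRootType: "Query" | "Mutation" | "Subscription";
}

// Returns one validation error per @defer/@stream use whose parent type is
// the root Mutation or Subscription type.
function validateDeferStreamRootFields(uses: DirectiveUse[]): string[] {
  const errors: string[] = [];
  for (const use of uses) {
    if (
      (use.name === "defer" || use.name === "stream") &&
      (use.parentRootType === "Mutation" ||
        use.parentRootType === "Subscription")
    ) {
      errors.push(
        `Directive "@${use.name}" must not be used on root fields of the ` +
          `${use.parentRootType} type.`,
      );
    }
  }
  return errors;
}
```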
-For example, the following document will not pass validation because `@stream` +For example, the following document will not pass validation because `@defer` has been used on a root mutation field: ```raw graphql counter-example mutation { - mutationField @stream + ... @defer { + mutationField + } } ``` -### Stream Directives Are Used On Valid Operations +### Defer And Stream Directives Are Used On Valid Operations **Formal Specification** @@ -1567,7 +1569,7 @@ mutation { - Let {fragmentName} be the name of {fragment}. - Add {fragmentName} to {subscriptionFragments}. - For every {directive} in a document: - - If {directiveName} is not "stream": + - If {directiveName} is not "defer" or "stream": - Continue to the next {directive}. - Let {ancestor} be the ancestor operation or fragment definition of {directive}. @@ -1584,30 +1586,33 @@ mutation { **Explanatory Text** -The stream directives can not be used to stream data in subscription operations. -If these directives appear in a subscription operation they must be disabled -using the "if" argument. This rule will not permit any stream directives on a -subscription operation that cannot be disabled using the "if" argument. +The defer and stream directives can not be used to defer or stream data in +subscription operations. If these directives appear in a subscription operation +they must be disabled using the "if" argument. This rule will not permit any +defer or stream directives on a subscription operation that cannot be disabled +using the "if" argument. -For example, the following document will not pass validation because `@stream` +For example, the following document will not pass validation because `@defer` has been used in a subscription operation with no "if" argument defined: ```raw graphql counter-example subscription sub { - newMessage @stream { - body + newMessage { + ... 
@defer { + body + } } } ``` -### Stream Directive Labels Are Unique +### Defer And Stream Directive Labels Are Unique **Formal Specification** - Let {labelValues} be an empty set. - For every {directive} in the document: - Let {directiveName} be the name of {directive}. - - If {directiveName} is "stream": + - If {directiveName} is "defer" or "stream": - For every {argument} in {directive}: - Let {argumentName} be the name of {argument}. - Let {argumentValue} be the value passed to {argument}. @@ -1618,40 +1623,51 @@ subscription sub { **Explanatory Text** -The `@stream` directive accepts an argument "label". This label may be used by -GraphQL clients to uniquely identify response payloads. If a label is passed, it -must not be a variable and it must be unique within all other `@stream` -directives in the document. +The `@defer` and `@stream` directives each accept an argument "label". This +label may be used by GraphQL clients to uniquely identify response payloads. If +a label is passed, it must not be a variable and it must be unique within all +other `@defer` and `@stream` directives in the document. 
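The label constraints can be sketched as a single pass over the `label` arguments encountered while walking a document; the argument record shape below is a hypothetical simplification, not the reference AST:

```typescript
// Simplified model of a @defer/@stream label argument as encountered while
// walking a document.
interface LabelArgument {
  directiveName: "defer" | "stream";
  label?: { kind: "StringValue" | "Variable"; value: string };
}

// Enforces: labels must not be variables and must be unique across all
// @defer and @stream directives in the document.
function validateLabels(args: LabelArgument[]): string[] {
  const errors: string[] = [];
  const seen = new Set<string>();
  for (const arg of args) {
    if (arg.label === undefined) continue; // label is optional
    if (arg.label.kind === "Variable") {
      errors.push(`@${arg.directiveName} "label" must not be a variable.`);
      continue;
    }
    if (seen.has(arg.label.value)) {
      errors.push(`Duplicate @defer/@stream label "${arg.label.value}".`);
      continue;
    }
    seen.add(arg.label.value);
  }
  return errors;
}
```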
For example, the following document is valid:
 
 ```graphql example
 {
-  pets @stream {
-    name
+  dog {
+    ...fragmentOne
+    ...fragmentTwo @defer(label: "dogDefer")
   }
   pets @stream(label: "petStream") {
-    owner {
-      name
-    }
+    name
+  }
+}
+
+fragment fragmentOne on Dog {
+  name
+}
+
+fragment fragmentTwo on Dog {
+  owner {
+    name
   }
 }
 ```
 
 For example, the following document will not pass validation because the same
-label is used in different `@stream` directives.:
+label is used in different `@defer` and `@stream` directives:
 
 ```raw graphql counter-example
 {
-  pets @stream(label: "MyLabel") {
-    name
+  dog {
+    ...fragmentOne @defer(label: "MyLabel")
   }
   pets @stream(label: "MyLabel") {
-    owner {
-      name
-    }
+    name
   }
 }
+
+fragment fragmentOne on Dog {
+  name
+}
 ```
 
 ### Stream Directives Are Used On List Fields
 
diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 85661e499..f91632886 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -256,10 +256,11 @@ CreateSourceEventStream(subscription, schema, variableValues, initialValue):
 - Let {groupedFieldSet} be the result of {CollectFields(subscriptionType,
   selectionSet, variableValues)}.
 - If {groupedFieldSet} does not have exactly one entry, raise a _request error_.
-- Let {fields} be the value of the first entry in {groupedFieldSet}.
-- Let {fieldName} be the name of the first entry in {fields}. Note: This value
-  is unaffected if an alias is used.
-- Let {field} be the first entry in {fields}.
+- Let {fieldDetailsList} be the value of the first entry in {groupedFieldSet}.
+- Let {fieldDetails} be the first entry in {fieldDetailsList}.
+- Let {node} be the corresponding entry on {fieldDetails}.
+- Let {fieldName} be the name of {node}. Note: This value is unaffected if an
+  alias is used. 
- Let {argumentValues} be the result of {CoerceArgumentValues(subscriptionType,
  node, variableValues)}
- Let {fieldStream} be the result of running
@@ -327,27 +328,27 @@ Executing the root selection set works similarly for queries (parallel),
 mutations (serial), and subscriptions (where it is executed for each event in
 the underlying Source Stream).
 
-First, the selection set is turned into a grouped field set; then, we execute
-this grouped field set and return the resulting {data} and {errors}.
+First, the selection set is turned into a field plan; then, we execute this
+field plan and return the resulting {data} and {errors}.
 
-If an operation contains `@stream` directives, execution may also result in an
-Subsequent Result stream in addition to the initial response. The procedure for
-yielding subsequent results is specified by the {YieldSubsequentResults()}
-algorithm.
+If an operation contains `@defer` or `@stream` directives, execution may also
+result in a Subsequent Result stream in addition to the initial response. The
+procedure for yielding subsequent results is specified by the
+{YieldSubsequentResults()} algorithm.
 
 ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet,
 serial):
 
 - If {serial} is not provided, initialize it to {false}.
-- Let {groupedFieldSet} be the result of {CollectFields(objectType,
-  selectionSet, variableValues)}.
+- Let {groupedFieldSet} and {newDeferUsages} be the result of
+  {CollectFields(objectType, selectionSet, variableValues)}.
+- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}.
 - Let {data} and {incrementalDigests} be the result of
-  {ExecuteGroupedFieldSet(groupedFieldSet, queryType, initialValue,
-  variableValues)} _serially_ if {serial} is {true}, _normally_ (allowing
-  parallelization) otherwise.
+  {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue,
+  variableValues, serial)}. 
- Let {errors} be the list of all _field error_ raised while executing the {groupedFieldSet}. -- Let {newPendingResults} and {futures} be the results of +- Let {newPendingResults}, {futures}, and {deferStates} be the result of {ProcessIncrementalDigests(incrementalDigests)}. - Let {ids} and {initialPayload} be the result of {GetIncrementalPayload(newPendingResults)}. @@ -355,19 +356,35 @@ serial): {errors}. - Set the corresponding entries on {initialPayload} to {data} and {errors}. - Let {subsequentResults} be the result of {YieldSubsequentResults(ids, - futures)}. + deferStates, futures)}. - Return {initialPayload} and {subsequentResults}. ### Field Collection -Before execution, the selection set is converted to a grouped field set by -calling {CollectFields()}. Each entry in the grouped field set is a list of -fields that share a response key (the alias if defined, otherwise the field -name). This ensures all fields with the same response key (including those in -referenced fragments) are executed at the same time. +Before execution, selection set(s) are converted to a field plan via a two-step +process. In the first step, selections are converted into a grouped field set by +calling {CollectFields()}. Each entry in a grouped field set is a list of Field +Details records describing all fields that share a response key (the alias if +defined, otherwise the field name). This ensures all fields with the same +response key (including those in referenced fragments) are executed at the same +time. + +A Field Details record is a structure containing: + +- {node}: the field node itself. +- {deferUsage}: the Defer Usage record corresponding to the deferred fragment + enclosing this field, not defined if the field was not deferred. + +Defer Usage records contain information derived from the presence of a `@defer` +directive on a fragment and are structures containing: + +- {label}: value of the corresponding argument to the `@defer` directive. 
+- {parentDeferUsage}: the parent Defer Usage record corresponding to the + deferred fragment enclosing this deferred fragment, not defined if this Defer + Usage record is deferred directly by the initial result. -As an example, collecting the fields of this selection set would collect two -instances of the field `a` and one of field `b`: +As an example, collecting the fields of this selection set would return field +details related to two instances of the field `a` and one of field `b`: ```graphql example { @@ -389,9 +406,11 @@ The depth-first-search order of the field groups produced by {CollectFields()} is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. -CollectFields(objectType, selectionSet, variableValues, visitedFragments): +CollectFields(objectType, selectionSet, variableValues, deferUsage, +newDeferUsages, visitedFragments): - If {visitedFragments} is not provided, initialize it to the empty set. +- If {newDeferUsages} is not provided, initialize it to the empty set. - Initialize {groupedFields} to an empty ordered map of lists. - For each {selection} in {selectionSet}: - If {selection} provides the directive `@skip`, let {skipDirective} be that @@ -407,14 +426,23 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments): - If {selection} is a {Field}: - Let {responseKey} be the response key of {selection} (the alias if defined, otherwise the field name). + - Let {fieldDetails} be a new Field Details record created from {selection} + and {deferUsage}. - Let {groupForResponseKey} be the list in {groupedFields} for {responseKey}; if no such list exists, create it as an empty list. - - Append {selection} to the {groupForResponseKey}. + - Append {fieldDetails} to the {groupForResponseKey}. - If {selection} is a {FragmentSpread}: - Let {fragmentSpreadName} be the name of {selection}. 
- If {selection} provides the directive `@defer` and its {if}
+    argument is not {false} and is not a variable in {variableValues} with the
+    value {false}:
+    - Let {deferDirective} be that directive.
+    - If this execution is for a subscription operation, raise a _field
+      error_.
+  - If {deferDirective} is not defined:
+    - If {fragmentSpreadName} is in {visitedFragments}, continue with the next
+      {selection} in {selectionSet}.
+    - Add {fragmentSpreadName} to {visitedFragments}.
   - Let {fragment} be the Fragment in the current Document whose name is
     {fragmentSpreadName}.
   - If no such {fragment} exists, continue with the next {selection} in
     {selectionSet}.
   - Let {fragmentType} be the type condition on {fragment}.
   - If {DoesFragmentTypeApply(objectType, fragmentType)} is false, continue
     with the next {selection} in {selectionSet}.
   - Let {fragmentSelectionSet} be the top-level selection set of {fragment}.
+  - If {deferDirective} is defined:
+    - Let {label} be the value of, or the variable passed to,
+      {deferDirective}'s {label} argument.
+    - Let {fragmentDeferUsage} be a new Defer Usage record created from
+      {label} and {deferUsage}.
+    - Add {fragmentDeferUsage} to {newDeferUsages}.
+  - Otherwise:
+    - Let {fragmentDeferUsage} be {deferUsage}.
   - Let {fragmentGroupedFieldSet} be the result of calling
     {CollectFields(objectType, fragmentSelectionSet, variableValues,
-    visitedFragments)}.
+    fragmentDeferUsage, newDeferUsages, visitedFragments)}.
   - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
     - Let {responseKey} be the response key shared by all fields in
       {fragmentGroup}.
@@ -438,16 +474,30 @@ CollectFields(objectType, selectionSet, variableValues, visitedFragments):
       fragmentType)} is false, continue with the next {selection} in
       {selectionSet}. 
- Let {fragmentSelectionSet} be the top-level selection set of {selection}.
+  - If {selection} provides the directive `@defer` and its {if} argument
+    is not {false} and is not a variable in {variableValues} with the value
+    {false}:
+    - Let {deferDirective} be that directive.
+    - If this execution is for a subscription operation, raise a _field
+      error_.
+  - If {deferDirective} is defined:
+    - Let {label} be the value of, or the variable passed to,
+      {deferDirective}'s {label} argument.
+    - Let {fragmentDeferUsage} be a new Defer Usage record created from
+      {label} and {deferUsage}.
+    - Add {fragmentDeferUsage} to {newDeferUsages}.
+  - Otherwise:
+    - Let {fragmentDeferUsage} be {deferUsage}.
   - Let {fragmentGroupedFieldSet} be the result of calling
     {CollectFields(objectType, fragmentSelectionSet, variableValues,
-    visitedFragments)}.
+    fragmentDeferUsage, newDeferUsages, visitedFragments)}.
   - For each {fragmentGroup} in {fragmentGroupedFieldSet}:
     - Let {responseKey} be the response key shared by all fields in
       {fragmentGroup}.
     - Let {groupForResponseKey} be the list in {groupedFields} for
       {responseKey}; if no such list exists, create it as an empty list.
     - Append all items in {fragmentGroup} to {groupForResponseKey}.
-- Return {groupedFields}.
+- Return {groupedFields} and {newDeferUsages}.
 
 DoesFragmentTypeApply(objectType, fragmentType):
 
@@ -464,6 +514,116 @@ DoesFragmentTypeApply(objectType, fragmentType):
 Note: The steps in {CollectFields()} evaluating the `@skip` and `@include`
 directives may be applied in either order since they apply commutatively.
 
+### Field Plan Generation
+
+In the second step, the original grouped field set is converted into a field
+plan via analysis of the Field Details.
+
+A Field Plan record is a structure containing:
+
+- {groupedFieldSet}: the grouped field set for the current result.
+- {newGroupedFieldSets}: an unordered map containing additional grouped field
+  sets related to previously encountered Defer Usage records. 
The map is + keyed by the unique set of Defer Usage records to which these new grouped + field sets belong. (See below for an explanation of why these additional + grouped field sets may be required.) +- {newGroupedFieldSetsRequiringDeferral}: a map containing additional grouped + field sets for new incremental results relating to the newly encountered + deferred fragments. The map is keyed by the set of Defer Usage records to + which these new grouped field sets belong. + +Additional grouped field sets are constructed carefully so as to ensure that +each field is executed exactly once and so that fields are grouped according to +the set of deferred fragments that include them. + +Deferred grouped field sets do not always require initiating deferral. For +example, when a parent field is deferred by multiple fragments, deferral is +initiated on the parent field. New grouped field sets for child fields will be +created if the child fields are not all present in all of the deferred +fragments, but these new grouped field sets, while representing deferred fields, +do not require additional deferral. The produced field plan will also retain +this information. + +BuildFieldPlan(groupedFieldSet, parentDeferUsages): + +- If {parentDeferUsages} is not provided, initialize it to the empty set. +- Initialize {originalGroupedFieldSet} to an empty ordered map. +- Initialize {newGroupedFieldSets} to an empty unordered map. +- Initialize {newGroupedFieldSetsRequiringDeferral} to an empty unordered map. +- For each {responseKey} and {groupForResponseKey} of {groupedFieldSet}: + - Let {deferUsageSet} be the result of + {GetDeferUsageSet(groupForResponseKey)}. + - If {IsSameSet(deferUsageSet, parentDeferUsages)} is {true}: + - Let {groupedFieldSet} be {originalGroupedFieldSet}. 
+  - Otherwise:
+    - Let {groupedFieldSets} be {newGroupedFieldSetsRequiringDeferral} if
+      {ShouldInitiateDefer(deferUsageSet, parentDeferUsages)} is {true},
+      otherwise let it be {newGroupedFieldSets}.
+    - For each {key} in {groupedFieldSets}:
+      - If {IsSameSet(key, deferUsageSet)} is {true}:
+        - Let {groupedFieldSet} be the map in {groupedFieldSets} for {key}.
+    - If {groupedFieldSet} is not defined:
+      - Initialize {groupedFieldSet} to an empty ordered map.
+      - Set the entry for {deferUsageSet} in {groupedFieldSets} to
+        {groupedFieldSet}.
+  - Set the entry for {responseKey} in {groupedFieldSet} to
+    {groupForResponseKey}.
+- Let {fieldPlan} be a new Field Plan record created from
+  {originalGroupedFieldSet}, {newGroupedFieldSets}, and
+  {newGroupedFieldSetsRequiringDeferral}.
+- Return {fieldPlan}.
+
+GetDeferUsageSet(fieldDetailsList):
+
+- Initialize {deferUsageSet} to the empty set.
+- Let {inInitialResult} be {false}.
+- For each {fieldDetails} in {fieldDetailsList}:
+  - Let {deferUsage} be the corresponding entry on {fieldDetails}.
+  - If {deferUsage} is not defined:
+    - Let {inInitialResult} be {true}.
+    - Continue to the next {fieldDetails} in {fieldDetailsList}.
+  - Add {deferUsage} to {deferUsageSet}.
+- If {inInitialResult} is {true}, reset {deferUsageSet} to the empty set;
+  otherwise, let {deferUsageSet} be the result of
+  {FilterDeferUsages(deferUsageSet)}.
+- Return {deferUsageSet}.
+
+FilterDeferUsages(deferUsages):
+
+- Initialize {filteredDeferUsages} to the empty set.
+- For each {deferUsage} in {deferUsages}:
+  - Let {ancestors} be the result of {GetAncestors(deferUsage)}.
+  - For each {ancestor} of {ancestors}:
+    - If {ancestor} is in {deferUsages}:
+      - Continue to the next {deferUsage} in {deferUsages}.
+  - Add {deferUsage} to {filteredDeferUsages}.
+- Return {filteredDeferUsages}.
+
+GetAncestors(deferUsage):
+
+- Initialize {ancestors} to an empty list.
+- Let {parentDeferUsage} be the corresponding entry on {deferUsage}. 
+- If {parentDeferUsage} is not defined, return {ancestors}. +- Append {parentDeferUsage} to {ancestors}. +- Append all the items in {GetAncestors(parentDeferUsage)} to {ancestors}. +- Return {ancestors}. + +ShouldInitiateDefer(deferUsageSet, parentDeferUsageSet): + +- For each {deferUsage} in {deferUsageSet}: + - If {parentDeferUsageSet} does not contain {deferUsage}: + - Return {true}. +- Return {false}. + +IsSameSet(setA, setB): + +- If the size of setA is not equal to the size of setB: + - Return {false}. +- For each {item} in {setA}: + - If {setB} does not contain {item}: + - Return {false}. +- Return {true}. + ### Processing Incremental Digests An Incremental Digest is a structure containing: @@ -475,11 +635,53 @@ An Incremental Digest is a structure containing: contain additional Incremental Digests that will immediately or eventually complete those results. -ProcessIncrementalDigests(incrementalDigests): +Given the current state of any pending results, if any, the +{ProcessIncrementalDigests()} algorithm describes how incremental digests are +processed to update that state as incremental digests are encountered. + +ProcessIncrementalDigests(incrementalDigests, originalDeferStates): +- Let {deferStates} be a new unordered map containing all entries in + {originalDeferStates}. - Let {newPendingResults} and {futures} be lists containing all of the items from the corresponding lists within each item of {incrementalDigests}. -- Return {newPendingResults} and {futures}. +- For each {future} in {futures}: + - Let {deferredFragments} be the list of deferred fragments completed by + {future}. + - For each {deferredFragment} of {deferredFragments}: + - Let {deferState} be the entry in {deferStates} for {deferredFragment}. + - Let {newDeferState} be a new unordered map containing all of the entries + in {deferState}. + - Let {count} be the corresponding entry on {newDeferState} for + {deferredFragment}. 
+    - Let {newCount} be {count} + 1 if {count} is defined, otherwise {1}.
+    - Set the {count} entry on {newDeferState} to {newCount}.
+    - Set the entry for {deferredFragment} in {deferStates} to
+      {newDeferState}.
+- Initialize {pending} to an empty list.
+- For each {newPendingResult} in {newPendingResults}:
+  - If {newPendingResult} is a deferred fragment:
+    - Let {deferState} be the entry in {deferStates} for {newPendingResult}.
+    - Let {parent} and {parentDeferState} be the result of
+      {GetParentAndParentDeferState(deferState, deferStates)}.
+    - If {parent} is not defined:
+      - Append {newPendingResult} to {pending}.
+    - Otherwise:
+      - Let {newParentDeferState} be an unordered map containing all of the
+        entries on {parentDeferState}.
+      - Let {children} be a new list containing all of the entries on
+        {children} on {newParentDeferState}.
+      - Append {newPendingResult} to {children}.
+      - Set the corresponding entry on {newParentDeferState} to {children}.
+      - Set the entry for {parent} in {deferStates} to {newParentDeferState}.
+- Return {pending}, {futures}, and {deferStates}.
+
+GetParentAndParentDeferState(deferState, deferStates):
+
+- Let {ancestors} be the corresponding entry on {deferState}.
+- For each {ancestor} of {ancestors}:
+  - Let {ancestorDeferState} be the entry in {deferStates} for {ancestor}.
+  - If {ancestorDeferState} is defined, return {ancestor} and
+    {ancestorDeferState}.
+- Return.
 
 ### Yielding Subsequent Results
 
 As an operation is executed, incremental results may complete. New payloads are
 yielded when new incremental results are completed or new futures have been
 initiated. Then, any completed future executions are processed to determine the
 payload to be yielded. Finally, if any pending results remain, the procedure is
 repeated recursively.
 
-YieldSubsequentResults(originalIds, newFutures, initiatedFutures):
+YieldSubsequentResults(originalIds, originalDeferStates, newFutures,
+initiatedFutures, pendingFutures):
 
 - Initialize {futures} to a list containing all items in {initiatedFutures}.
+- If {pendingFutures} is not provided, initialize it to an empty list. 
- For each {future} in {newFutures}:
-  - If {future} has not been initiated, initiate it.
-  - Append {future} to {futures}.
-- Wait for any future execution contained in {futures} to complete.
-- Let {updates}, {newPendingResults}, {newestFutures}, and {remainingFutures} be
-  the result of {ProcessCompletedFutures(futures)}.
+  - If {future} contributes to a pending result that has been sent:
+    - If {future} has not been initiated, initiate it.
+    - Append {future} to {futures}.
+  - Otherwise:
+    - Append {future} to {pendingFutures}.
+- Wait for any future execution contained in {futures} to
+  complete.
+- Let {deferStates}, {updates}, {newPendingResults}, {newestFutures},
+  {remainingFutures}, and {pendingFutures} be the result of
+  {ProcessCompletedFutures(futures, originalDeferStates)}.
 - Let {ids} and {payload} be the result of
   {GetIncrementalPayload(newPendingResults, originalIds, updates)}.
-- Yield {payload}.
+- If {hasNext} is not the only entry on {payload}, yield {payload}.
 - If {hasNext} on {payload} is {false}:
   - Complete this subsequent result stream and return.
-- Yield the results of {YieldSubsequentResults(ids, newestFutures,
-  remainingFutures)}.
+- Yield the results of {YieldSubsequentResults(ids, deferStates, newestFutures,
+  remainingFutures, pendingFutures)}.
 
 GetIncrementalPayload(newPendingResults, originalIds, updates):
 
@@ -530,14 +739,25 @@ GetIncrementalPayload(newPendingResults, originalIds, updates):
     {errors}.
   - Append {completedEntry} to {completed}.
 - For each {incrementalResult} in {incremental}:
-  - Let {stream} be the corresponding entry on {incrementalResult}.
-  - Let {id} be the corresponding entry on {ids} for {stream}.
-  - If {id} is not defined, continue to the next {incrementalResult} in
-    {incremental}.
-  - Let {items} and {errors} be the corresponding entries on
-    {incrementalResult}.
-  - Let {incrementalEntry} be an unordered map containing {id}, {items}, and
-    {errors}. 
+  - If {incrementalResult} represents completion of Stream Items:
+    - Let {stream} be the corresponding entry on {incrementalResult}.
+    - Let {id} be the corresponding entry on {ids} for {stream}.
+    - If {id} is not defined, continue to the next {incrementalResult} in
+      {incremental}.
+    - Let {items} and {errors} be the corresponding entries on
+      {incrementalResult}.
+    - Let {incrementalEntry} be an unordered map containing {id}, {items}, and
+      {errors}.
+  - Otherwise:
+    - Let {id} and {subPath} be the result of calling
+      {GetIdAndSubPath(incrementalResult, ids)}.
+    - If {id} is not defined, continue to the next {incrementalResult} in
+      {incremental}.
+    - Let {data} and {errors} be the corresponding entries on
+      {incrementalResult}.
+    - Let {incrementalEntry} be an unordered map containing {id}, {subPath},
+      {data}, and {errors}.
   - Append {incrementalEntry} to {incremental}.
 - Let {hasNext} be {false} if {ids} is empty, otherwise {true}.
 - Let {payload} be an unordered map containing {hasNext}.
 - If {pending} is not empty:
   - Set the corresponding entry on {payload} to {pending}.
   - Set the corresponding entry on {payload} to {completed}.
 - Return {ids} and {payload}.
 
+GetIdAndSubPath(deferredResult, ids):
+
+- Initialize {releasedFragments} to an empty list.
+- Let {deferredFragments} be the corresponding entry on {deferredResult}.
+- For each {deferredFragment} in {deferredFragments}:
+  - Let {id} be the entry for {deferredFragment} on {ids}.
+  - If {id} is defined, append {deferredFragment} to {releasedFragments}.
+- Let {currentFragment} be the first member of {releasedFragments}.
+- Let {currentPath} be the entry for {path} on {currentFragment}.
+- Let {currentPathLength} be the length of {currentPath}.
+- For each remaining {deferredFragment} within {deferredFragments}:
+  - Let {path} be the corresponding entry on {deferredFragment}.
+  - Let {pathLength} be the length of {path}. 
+  - If {pathLength} is larger than {currentPathLength}:
+    - Set {currentPathLength} to {pathLength}.
+    - Set {currentFragment} to {deferredFragment}.
+- Let {id} be the entry on {ids} for {currentFragment}.
+- If {id} is not defined, return.
+- Let {path} be the corresponding entry on {deferredResult}.
+- Let {subPath} be the subset of {path}, omitting the first {currentPathLength}
+  entries.
+- Return {id} and {subPath}.
+
 ### Processing Completed Futures
 
 As future executions are completed, the {ProcessCompletedFutures()} algorithm
 
@@ -565,18 +808,34 @@ for any completed futures, as long as the new Incremental Digests do not
 contain any new pending results. If they do, first a new payload is yielded,
 notifying the client that new pending results have been encountered.
 
-ProcessCompletedFutures(maybeCompletedFutures, updates, pending,
-incrementalDigests, remainingFutures):
+ProcessCompletedFutures(maybeCompletedFutures, originalDeferStates, updates,
+pending, incrementalDigests, remainingFutures, pendingFutures):
 
-- If {updates}, {pending}, {incrementalDigests}, or {remainingFutures} are not
-  provided, initialize them to empty lists.
+- If {updates}, {pending}, {incrementalDigests}, {remainingFutures}, or
+  {pendingFutures} are not provided, initialize them to empty lists.
 - Let {completedFutures} be a list containing all completed futures from
   {maybeCompletedFutures}; append the remaining futures to {remainingFutures}.
+- Let {deferStates} be {originalDeferStates}.
 - Initialize {supplementalIncrementalDigests} to an empty list.
 - For each {completedFuture} in {completedFutures}:
   - Let {result} be the result of {completedFuture}.
-  - Let {update} and {resultIncrementalDigests} be the result of calling
-    {GetUpdatesForStreamItems(result)}.
+  - If {result} represents completion of Stream Items:
+    - Let {update} and {resultIncrementalDigests} be the result of calling
+      {GetUpdatesForStreamItems(result)}.
+    - Let {remainingPendingFutures} be {pendingFutures}.
+  - Otherwise:
+    - Let {deferStates}, {update}, {resultPending}, and
+      {resultIncrementalDigests} be the result of calling
+      {GetUpdatesForDeferredResult(deferStates, result)}.
+    - Append all items in {resultPending} to {pending}.
+    - Initialize {releasedFutures} and {remainingPendingFutures} to empty lists.
+    - For each {future} in {pendingFutures}:
+      - Let {deferredFragments} be the Deferred Fragments completed by {future}.
+      - For each {deferredFragment} of {deferredFragments}:
+        - If {deferredFragment} is in {resultPending}:
+          - Append {future} to {releasedFutures}.
+          - Continue to the next {future} in {pendingFutures}.
+      - Append {future} to {remainingPendingFutures}.
   - Append {update} to {updates}.
   - For each {resultIncrementalDigest} in {resultIncrementalDigests}:
     - If {resultIncrementalDigest} contains a {newPendingResults} entry:
@@ -584,15 +843,16 @@ incrementalDigests, remainingFutures):
     - Otherwise:
       - Append {resultIncrementalDigest} to {supplementalIncrementalDigests}.
 - If {supplementalIncrementalDigests} is empty:
-  - Let {newPendingResults} and {futures} be the result of
-    {ProcessIncrementalDigests(incrementalDigests)}.
+  - Let {newPendingResults}, {futures}, and {deferStates} be the result of
+    {ProcessIncrementalDigests(incrementalDigests, originalDeferStates)}.
   - Append all items in {newPendingResults} to {pending}.
-  - Return {updates}, {pending}, {newFutures}, and {remainingFutures}.
-- Let {newPendingResults} and {futures} be the result of
-  {ProcessIncrementalDigests(supplementalIncrementalDigests)}.
+  - Return {deferStates}, {updates}, {pending}, {futures}, {remainingFutures},
+    and {remainingPendingFutures}.
+- Let {newPendingResults}, {futures}, and {deferStates} be the result of
+  {ProcessIncrementalDigests(supplementalIncrementalDigests, deferStates)}.
 - Append all items in {newPendingResults} to {pending}.
-- Return the result of {ProcessCompletedFutures(futures, updates, pending,
-  incrementalDigests, remainingFutures)}.
+- Return the result of {ProcessCompletedFutures(futures, deferStates, updates,
+  pending, incrementalDigests, remainingFutures, remainingPendingFutures)}.
 
 GetUpdatesForStreamItems(streamItems):
 
@@ -609,8 +869,114 @@ GetUpdatesForStreamItems(streamItems):
 - Let {incremental} be a list containing {streamItems}.
 - Let {update} be an unordered map containing {incremental}.
 - Let {incrementalDigests} be the corresponding entry on {streamItems}.
-- Let {updates} be a list containing {update}.
-- Return {updates} and {incrementalDigests}.
+- Return {update} and {incrementalDigests}.
+
+GetUpdatesForDeferredResult(originalDeferStates, deferredResult):
+
+- Let {deferStates} be a new unordered map containing all of the entries in
+  {originalDeferStates}.
+- Initialize {incrementalDigests} to an empty list.
+- Let {deferredFragments}, {data}, and {errors} be the corresponding entries on
+  {deferredResult}.
+- Initialize {completed} to an empty list.
+- If {data} is {null}:
+  - For each {deferredFragment} of {deferredFragments}:
+    - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
+    - If {deferState} is not defined, continue to the next {deferredFragment} of
+      {deferredFragments}.
+    - Remove the entry for {deferredFragment} on {deferStates}.
+    - Append {deferredFragment} to {completed}.
+  - Let {update} be an unordered map containing {completed} and {errors}.
+  - Return {deferStates}, {update}, the empty set, and {incrementalDigests}.
+- Initialize {incremental} to an empty list.
+- Initialize {newPending} to the empty set.
+- For each {deferredFragment} of {deferredFragments}:
+  - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
+  - If {deferState} is not defined, continue to the next {deferredFragment} of
+    {deferredFragments}.
+  - Let {newDeferState} be a new unordered map containing all entries on
+    {deferState}.
+  - Set the entry for {deferredFragment} on {deferStates} to {newDeferState}.
+  - Decrement {count} on {newDeferState}.
+  - Let {pending} be a new set containing all of the members of {pending} on
+    {newDeferState}.
+  - Set the corresponding entry on {newDeferState} to {pending}.
+  - Add {deferredResult} to {pending}.
+  - If {count} on {newDeferState} is equal to {0}:
+    - Let {children} be the corresponding entry on {newDeferState}.
+    - Add all items in {children} to {newPending}.
+    - Remove the entry for {deferredFragment} on {deferStates}.
+    - Append {deferredFragment} to {completed}.
+    - Append all items in {pending} on {newDeferState} to {incremental}.
+- For each {deferredResult} in {incremental}:
+  - Let {deferredFragments} be the corresponding entry on {deferredResult}.
+  - For each {deferredFragment} in {deferredFragments}:
+    - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
+    - If {deferState} is not defined, continue to the next {deferredFragment} of
+      {deferredFragments}.
+    - Let {pending} be the corresponding entry on {deferState}.
+    - If {pending} contains {deferredResult}:
+      - Let {newDeferState} be a new unordered map containing all entries on
+        {deferState}.
+      - Set the entry for {deferredFragment} on {deferStates} to
+        {newDeferState}.
+      - Let {pending} be a new set containing all of the members of {pending} on
+        {newDeferState}.
+      - Set the corresponding entry on {newDeferState} to {pending}.
+      - Remove {deferredResult} from {pending}.
+- Let {update} be an unordered map containing {incremental} and {completed}.
+- Return {deferStates}, {update}, {newPending}, and {incrementalDigests}.
+
+## Executing a Field Plan
+
+To execute a field plan, the object value being evaluated and the object type
+need to be known, as well as whether the non-deferred grouped field set must be
+executed serially, or may be executed in parallel.
+
+ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, objectValue,
+variableValues, serial, path, deferUsageSet, deferMap):
+
+- If {path} is not provided, initialize it to an empty list.
+- Let {groupedFieldSet}, {newGroupedFieldSets}, and
+  {newGroupedFieldSetsRequiringDeferral} be the corresponding entries on
+  {fieldPlan}.
+- Let {newDeferMap} and {newPendingResults} be the result of
+  {GetNewDeferredFragments(newDeferUsages, path, deferMap)}.
+- Allowing for parallelization, perform the following steps:
+  - Let {data} and {nestedIncrementalDigests} be the result of running
+    {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+    variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
+    {true}, _normally_ (allowing parallelization) otherwise.
+  - Let {incrementalDigest} be the result of
+    {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+    newGroupedFieldSets, false, path, newDeferMap)}.
+  - Let {deferredIncrementalDigest} be the result of
+    {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+    newGroupedFieldSetsRequiringDeferral, true, path, newDeferMap)}.
+- Set the corresponding entry on {deferredIncrementalDigest} to
+  {newPendingResults}.
+- Let {incrementalDigests} be a list containing {incrementalDigest},
+  {deferredIncrementalDigest}, and all items in {nestedIncrementalDigests}.
+- Return {data} and {incrementalDigests}.
+
+GetNewDeferredFragments(newDeferUsages, path, deferMap):
+
+- Initialize {newDeferredFragments} to an empty list.
+- If {newDeferUsages} is empty:
+  - Return {deferMap} and {newDeferredFragments}.
+- Let {newDeferMap} be a new unordered map of Defer Usage records to Deferred
+  Fragment records containing all of the entries in {deferMap}.
+- For each {deferUsage} in {newDeferUsages}:
+  - Initialize {ancestors} to an empty list.
+  - Let {deferUsageAncestors} be the result of {GetAncestors(deferUsage)}.
+  - For each {deferUsageAncestor} of {deferUsageAncestors}:
+    - Let {ancestor} be the entry in {newDeferMap} for {deferUsageAncestor}.
+    - Append {ancestor} to {ancestors}.
+ - Let {label} be the corresponding entry on {deferUsage}. + - Let {newDeferredFragment} be an unordered map containing {ancestors}, {path} + and {label}. + - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. + - Append {newDeferredFragment} to {newDeferredFragments}. +- Return {newDeferMap} and {newDeferredFragments}. ## Executing a Grouped Field Set @@ -622,9 +988,8 @@ Each represented field in the grouped field set produces an entry into a response map. ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, -path): +path, deferUsageSet, deferMap): -- If {path} is not provided, initialize it to an empty list. - Initialize {resultMap} to an empty ordered map. - Initialize {incrementalDigests} to an empty list. - For each {groupedFieldSet} as {responseKey} and {fields}: @@ -658,19 +1023,23 @@ to a location that has resolved to {null} due to propagation of a field error. If these subsequent results have not yet executed or have not yet yielded a value they may also be cancelled to avoid unnecessary work. -For example, assume the field `alwaysThrows` is a list of `Non-Null` type where -completion of the list item always raises a field error: +For example, assume the field `alwaysThrows` is a `Non-Null` type that always +raises a field error: ```graphql example { - myObject(initialCount: 1) @stream { + myObject { + ... @defer { + name + } alwaysThrows } } ``` -In this case, only one response should be sent. Subsequent items from the stream -should be ignored and their completion, if initiated, may be cancelled. +In this case, only one response should be sent. The result of the fragment +tagged with the `@defer` directive should be ignored and its execution, if +initiated, may be cancelled. 
 ```json example
 {
@@ -777,9 +1146,48 @@ A correct executor must generate the following result for that selection set:
 }
 ```
 
-When subsections contain a `@stream` directive, these subsections are no longer
-required to execute serially. Execution of the streamed sections of the
-subsection may be executed in parallel, as defined in {ExecuteStreamField}.
+When subsections contain a `@stream` or `@defer` directive, these subsections
+are no longer required to execute serially. Execution of the deferred or
+streamed sections of the subsection may be executed in parallel, as defined in
+{ExecuteDeferredGroupedFieldSets} and {ExecuteStreamField}.
+
+## Executing Deferred Grouped Field Sets
+
+ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+newGroupedFieldSets, shouldInitiateDefer, path, deferMap):
+
+- Initialize {futures} to an empty list.
+- For each {deferUsageSet} and {newGroupedFieldSet} in {newGroupedFieldSets}:
+  - Let {deferredFragments} be an empty list.
+  - For each {deferUsage} in {deferUsageSet}:
+    - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}.
+    - Append {deferredFragment} to {deferredFragments}.
+  - Let {groupedFieldSet} be the corresponding entry on {newGroupedFieldSet}.
+  - Let {future} represent the future execution of
+    {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+    variableValues, deferredFragments, path, deferUsageSet, deferMap)},
+    incrementally completing {deferredFragments}.
+  - If {shouldInitiateDefer} is {false}:
+    - Initiate {future}.
+  - Otherwise, if early execution of deferred fields is desired:
+    - Following any implementation specific deferral of further execution,
+      initiate {future}.
+  - Append {future} to {futures}.
+- Let {incrementalDigest} be a new Incremental Digest created from {futures}.
+- Return {incrementalDigest}.
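The initiation policy above — futures for non-deferred grouped field sets start immediately, while deferral-requiring ones are left for the implementation to start later or on demand — can be sketched in TypeScript. This is a non-normative illustration: `LazyFuture`, `initiated`, and `collectDeferredFutures` are invented names standing in for the specification's Future executions.

```typescript
// Non-normative sketch: a Future whose underlying work does not begin
// until initiate() is called, mirroring "initiate {future}" above.
class LazyFuture<T> {
  private promise: Promise<T> | undefined;
  constructor(private readonly executor: () => Promise<T>) {}
  initiate(): Promise<T> {
    // Initiating more than once is a no-op; the first initiation wins.
    this.promise ??= this.executor();
    return this.promise;
  }
  get initiated(): boolean {
    return this.promise !== undefined;
  }
}

// Mirrors the loop above: non-deferred grouped field sets are initiated
// immediately; deferral-requiring ones are merely collected, leaving the
// implementation free to initiate them later ("early execution") or not.
function collectDeferredFutures<T>(
  executors: ReadonlyArray<() => Promise<T>>,
  shouldInitiateDefer: boolean,
): Array<LazyFuture<T>> {
  const futures: Array<LazyFuture<T>> = [];
  for (const executor of executors) {
    const future = new LazyFuture(executor);
    if (!shouldInitiateDefer) {
      future.initiate();
    }
    futures.push(future);
  }
  return futures;
}
```

A lazily-initiated promise of this shape lets an implementation cancel pending deferred work that is no longer needed without ever having started it.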
+
+ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+variableValues, deferredFragments, path, deferUsageSet, deferMap):
+
+- Let {data} and {incrementalDigests} be the result of running
+  {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+  variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing
+  parallelization).
+- Let {errors} be the list of all _field error_ raised while executing the
+  {groupedFieldSet}.
+- Let {deferredResult} be an unordered map containing {path},
+  {deferredFragments}, {data}, {errors}, and {incrementalDigests}.
+- Return {deferredResult}.
 
 ## Executing Fields
 
@@ -789,17 +1197,19 @@ coerces any provided argument values, then resolves a value for the field, and
 finally completes that value either by recursively executing another selection
 set or coercing a scalar value.
 
-ExecuteField(objectType, objectValue, fieldType, fields, variableValues, path):
+ExecuteField(objectType, objectValue, fieldType, fieldDetailsList,
+variableValues, path, deferUsageSet, deferMap):
 
-- Let {field} be the first entry in {fields}.
-- Let {fieldName} be the field name of {field}.
+- Let {fieldDetails} be the first entry in {fieldDetailsList}.
+- Let {node} be the corresponding entry on {fieldDetails}.
+- Let {fieldName} be the field name of {node}.
 - Append {fieldName} to {path}.
-- Let {argumentValues} be the result of {CoerceArgumentValues(objectType, field,
-  variableValues)}
+- Let {argumentValues} be the result of {CoerceArgumentValues(objectType, node,
+  variableValues)}.
 - Let {resolvedValue} be {ResolveFieldValue(objectType, objectValue, fieldName,
   argumentValues)}.
-- Return the result of {CompleteValue(fieldType, fields, resolvedValue,
-  variableValues, path)}.
+- Return the result of {CompleteValue(fieldType, fieldDetailsList,
+  resolvedValue, variableValues, path, deferUsageSet, deferMap)}.
 
 ### Coercing Field Arguments
 
@@ -899,7 +1309,8 @@ completion process. In the case where `@stream` is specified on a field of list
 type, value completion iterates over the collection until the number of
 yielded items satisfies `initialCount` specified on the `@stream` directive.
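The `initialCount` handling described in the paragraph above can be sketched as follows. This is a non-normative illustration: `completeStreamedList` is an invented helper name (the specification performs this work inside value completion), and the hand-off of the iterator to a streaming future is elided.

```typescript
// Non-normative sketch of @stream value completion: complete list items
// synchronously until `initialCount` items have been yielded, then leave
// the iterator positioned at the first item to be delivered incrementally.
function completeStreamedList<T>(
  iterator: Iterator<T>,
  initialCount: number,
): { initialItems: Array<T>; streamedFrom: number | null } {
  const initialItems: Array<T> = [];
  let index = 0;
  while (index < initialCount) {
    const { value, done } = iterator.next();
    if (done) {
      // The iterator was exhausted before reaching initialCount;
      // nothing is left to stream.
      return { initialItems, streamedFrom: null };
    }
    initialItems.push(value);
    index += 1;
  }
  // Items from `index` onward would be completed by a streaming future.
  return { initialItems, streamedFrom: index };
}
```

The returned index corresponds to the `index` argument passed to {ExecuteStreamField()} for the first streamed item.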
-CompleteValue(fieldType, fields, result, variableValues, path):
+CompleteValue(fieldType, fieldDetailsList, result, variableValues, path,
+deferUsageSet, deferMap):
 
 - If the {fieldType} is a Non-Null type:
   - Let {innerType} be the inner type of {fieldType}.
@@ -932,6 +1343,8 @@ CompleteValue(fieldType, fields, result, variableValues, path):
     - If {streamDirective} is defined and {index} is greater than or equal to
       {initialCount}:
       - Let {stream} be an unordered map containing {path} and {label}.
+      - Let {streamFieldDetailsList} be the result of
+        {GetStreamFieldDetailsList(fieldDetailsList)}.
       - Let {future} represent the future execution of {ExecuteStreamField(stream,
         iterator, streamFieldDetailsList, index, innerType, variableValues)}.
@@ -961,15 +1374,26 @@ CompleteValue(fieldType, fields, result, variableValues, path):
   - Let {objectType} be {fieldType}.
   - Otherwise if {fieldType} is an Interface or Union type.
     - Let {objectType} be {ResolveAbstractType(fieldType, result)}.
-  - Let {groupedFieldSet} be the result of calling {CollectSubfields(objectType,
-    fields, variableValues)}.
-  - Return the result of evaluating {ExecuteGroupedFieldSet(groupedFieldSet,
-    objectType, result, variableValues)} _normally_ (allowing for
-    parallelization).
+  - Let {groupedFieldSet} and {newDeferUsages} be the result of calling
+    {CollectSubfields(objectType, fieldDetailsList, variableValues)}.
+  - Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet,
+    deferUsageSet)}.
+  - Return the result of {ExecuteFieldPlan(newDeferUsages, fieldPlan,
+    objectType, result, variableValues, false, path, deferUsageSet, deferMap)}.
+
+GetStreamFieldDetailsList(fieldDetailsList):
+
+- Let {streamFields} be an empty list.
+- For each {fieldDetails} in {fieldDetailsList}:
+  - Let {node} be the corresponding entry on {fieldDetails}.
+  - Let {newFieldDetails} be a new Field Details record created from {node}.
+  - Append {newFieldDetails} to {streamFields}.
+- Return {streamFields}.
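A minimal sketch of {GetStreamFieldDetailsList()} follows. The record shape is illustrative (a real implementation would carry the field's AST node rather than a string); the key behavior is that each new Field Details record is created from the node alone, dropping any defer usage, presumably because streamed items are delivered independently of any enclosing `@defer`.

```typescript
// Non-normative, illustrative shape of a Field Details record.
interface FieldDetails {
  node: string; // stands in for the field's AST node
  deferUsage?: { label?: string };
}

// Sketch of GetStreamFieldDetailsList: new records keep only the node.
function getStreamFieldDetailsList(
  fieldDetailsList: ReadonlyArray<FieldDetails>,
): Array<FieldDetails> {
  const streamFields: Array<FieldDetails> = [];
  for (const { node } of fieldDetailsList) {
    // A new Field Details record is created from the node alone;
    // deferUsage is intentionally omitted.
    streamFields.push({ node });
  }
  return streamFields;
}
```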
 #### Execute Stream Field
 
-ExecuteStreamField(stream, iterator, fields, index, innerType, variableValues):
+ExecuteStreamField(stream, iterator, fieldDetailsList, index, innerType,
+variableValues):
 
 - Let {path} be the corresponding entry on {stream}.
 - Let {itemPath} be {path} with {index} appended.
@@ -984,7 +1408,7 @@ ExecuteStreamField(stream, iterator, fields, index, innerType, variableValues):
 - Let {errors} be the list of all _field error_ raised while completing the
   item.
 - Let {future} represent the future execution of {ExecuteStreamField(stream,
-  path, iterator, fields, nextIndex, innerType, variableValues)}.
+  iterator, fieldDetailsList, nextIndex, innerType, variableValues)}.
 - If early execution of streamed fields is desired:
   - Following any implementation specific deferral of further execution,
     initiate {future}.
@@ -1060,14 +1484,15 @@ sub-selections.
 
 After resolving the value for `me`, the selection sets are merged together so
 `firstName` and `lastName` can be resolved for one value.
 
-CollectSubfields(objectType, fields, variableValues):
+CollectSubfields(objectType, fieldDetailsList, variableValues):
 
 - Let {groupedFieldSet} be an empty map.
-- For each {field} in {fields}:
+- For each {fieldDetails} in {fieldDetailsList}:
+  - Let {field} and {deferUsage} be the corresponding entries on {fieldDetails}.
   - Let {fieldSelectionSet} be the selection set of {field}.
   - If {fieldSelectionSet} is null or empty, continue to the next field.
   - Let {subGroupedFieldSet} be the result of {CollectFields(objectType,
-    fieldSelectionSet, variableValues)}.
+    fieldSelectionSet, variableValues, deferUsage)}.
   - For each {subGroupedFieldSet} as {responseKey} and {subfields}:
     - Let {groupForResponseKey} be the list in {groupedFieldSet} for
       {responseKey}; if no such list exists, create it as an empty list.
@@ -1107,9 +1532,83 @@ resolves to {null}, then the entire list must resolve to {null}.
 If the `List` type is also wrapped in a `Non-Null`, the field error continues
 to propagate upwards.
 
-When a field error is raised inside `ExecuteStreamField`, the stream payloads
-act as error boundaries. That is, the null resulting from a `Non-Null` type
-cannot propagate outside of the boundary of the stream payload.
+When a field error is raised inside `ExecuteDeferredGroupedFieldSets` or
+`ExecuteStreamField`, the defer and stream payloads act as error boundaries.
+That is, the null resulting from a `Non-Null` type cannot propagate outside of
+the boundary of the defer or stream payload.
+
+If a field error is raised while executing the selection set of a fragment with
+the `@defer` directive, causing a {null} to propagate to the object containing
+this fragment, the {null} should not be sent to the client, as this will
+overwrite existing data. In this case, the associated Defer Payload's
+`completed` entry must include the causative errors, whose presence indicates
+the failure of the payload to be included within the final reconcilable object.
+
+For example, assume the `month` field is a `Non-Null` type that raises a field
+error:
+
+```graphql example
+{
+  birthday {
+    ... @defer(label: "monthDefer") {
+      month
+    }
+    ... @defer(label: "yearDefer") {
+      year
+    }
+  }
+}
+```
+
+Response 1, the initial response is sent:
+
+```json example
+{
+  "data": { "birthday": {} },
+  "pending": [
+    { "path": ["birthday"], "label": "monthDefer" },
+    { "path": ["birthday"], "label": "yearDefer" }
+  ],
+  "hasNext": true
+}
+```
+
+Response 2, the defer payload for label "monthDefer" is completed with errors.
+Incremental data cannot be sent, as this would overwrite previously sent values.
+
+```json example
+{
+  "completed": [
+    {
+      "path": ["birthday"],
+      "label": "monthDefer",
+      "errors": [...]
+    }
+  ],
+  "hasNext": true
+}
+```
+
+Response 3, the defer payload for label "yearDefer" is sent. The data in this
+payload is unaffected by the previous null error.
+ +```json example +{ + "incremental": [ + { + "path": ["birthday"], + "data": { "year": "2022" } + } + ], + "completed": [ + { + "path": ["birthday"], + "label": "yearDefer" + } + ], + "hasNext": false +} +``` If the `stream` directive is present on a list field with a Non-Nullable inner type, and a field error has caused a {null} to propagate to the list item, the diff --git a/spec/Section 7 -- Response.md b/spec/Section 7 -- Response.md index 112c7f6ff..45d24a59e 100644 --- a/spec/Section 7 -- Response.md +++ b/spec/Section 7 -- Response.md @@ -24,13 +24,14 @@ validation error, this entry must not be present. When the response of the GraphQL operation is a response stream, the first value will be the initial response. All subsequent values may contain an `incremental` -entry, containing a list of Stream payloads. +entry, containing a list of Defer or Stream payloads. -The `label` and `path` entries on Stream payloads are used by clients to -identify the `@stream` directive from the GraphQL operation that triggered this -response to be included in an `incremental` entry on a value returned by the -response stream. When a label is provided, the combination of these two entries -will be unique across all Stream payloads returned in the response stream. +The `label` and `path` entries on Defer and Stream payloads are used by clients +to identify the `@defer` or `@stream` directive from the GraphQL operation that +triggered this response to be included in an `incremental` entry on a value +returned by the response stream. When a label is provided, the combination of +these two entries will be unique across all Defer and Stream payloads returned +in the response stream. If the response of the GraphQL operation is a response stream, each response map must contain an entry with key `hasNext`. The value of this entry is `true` for @@ -48,9 +49,9 @@ set, must have a map as its value. 
This entry is reserved for implementors to extend the protocol however they see fit, and hence there are no additional restrictions on its contents. When the response of the GraphQL operation is a response stream, implementors may send subsequent response maps containing only -`hasNext` and `extensions` entries. Stream payloads may also contain an entry -with the key `extensions`, also reserved for implementors to extend the protocol -however they see fit. +`hasNext` and `extensions` entries. Defer and Stream payloads may also contain +an entry with the key `extensions`, also reserved for implementors to extend the +protocol however they see fit. To ensure future changes to the protocol do not break existing services and clients, the top level response map must not contain any entries other than the @@ -266,32 +267,40 @@ discouraged. ### Incremental Delivery The `pending` entry in the response is a non-empty list of references to pending -Stream results. If the response of the GraphQL operation is a response stream, -this field should appear on the initial and possibly subsequent payloads. +Defer or Stream results. If the response of the GraphQL operation is a response +stream, this field should appear on the initial and possibly subsequent +payloads. The `incremental` entry in the response is a non-empty list of data fulfilling -Stream results. If the response of the GraphQL operation is a response stream, -this field may appear on the subsequent payloads. +Defer or Stream results. If the response of the GraphQL operation is a response +stream, this field may appear on the subsequent payloads. The `completed` entry in the response is a non-empty list of references to -completed Stream results. +completed Defer or Stream results. 
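Non-normatively, the payload shapes described above can be sketched as TypeScript types, with a small helper distinguishing Defer payloads from Stream payloads by the presence of `data` versus `items`. The type and function names here are illustrative, not part of the specification.

```typescript
// Non-normative sketch of the incremental delivery response entries.
type Path = Array<string | number>;

interface PendingEntry {
  path: Path;
  label?: string;
}

interface StreamPayload {
  path: Path;
  items: Array<unknown>;
  errors?: Array<unknown>;
}

interface DeferPayload {
  path: Path;
  data: Record<string, unknown>;
  errors?: Array<unknown>;
}

interface CompletedEntry {
  path: Path;
  label?: string;
  errors?: Array<unknown>;
}

interface SubsequentResponse {
  pending?: Array<PendingEntry>;
  incremental?: Array<StreamPayload | DeferPayload>;
  completed?: Array<CompletedEntry>;
  hasNext: boolean;
}

// A Defer payload is distinguished from a Stream payload by its `data` entry.
function isDeferPayload(
  payload: StreamPayload | DeferPayload,
): payload is DeferPayload {
  return "data" in payload;
}
```

Clients consuming a response stream can use a discriminator of this kind to route each `incremental` entry to defer or stream handling.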
-For example:
+For example, a query containing both defer and stream:
 
 ```graphql example
 query {
   person(id: "cGVvcGxlOjE=") {
+    ...HomeWorldFragment @defer(label: "homeWorldDefer")
     name
     films @stream(initialCount: 1, label: "filmsStream") {
       title
     }
   }
 }
+
+fragment HomeWorldFragment on Person {
+  homeWorld {
+    name
+  }
+}
 ```
 
 The response stream might look like:
 
-Response 1, the initial response does not contain any streamed results.
+Response 1, the initial response does not contain any deferred or streamed
+results.
 
 ```json example
 {
@@ -301,21 +310,29 @@ Response 1, the initial response does not contain any streamed results.
       "films": [{ "title": "A New Hope" }]
     }
   },
-  "pending": [{ "path": ["person", "films"], "label": "filmStream" }],
+  "pending": [
+    { "path": ["person"], "label": "homeWorldDefer" },
+    { "path": ["person", "films"], "label": "filmsStream" }
+  ],
   "hasNext": true
 }
 ```
 
-Response 2, contains the first stream payload.
+Response 2 contains the defer payload and the first stream payload.
 
 ```json example
 {
   "incremental": [
+    {
+      "path": ["person"],
+      "data": { "homeWorld": { "name": "Tatooine" } }
+    },
     {
       "path": ["person", "films"],
       "items": [{ "title": "The Empire Strikes Back" }]
     }
   ],
+  "completed": [{ "path": ["person"], "label": "homeWorldDefer" }],
   "hasNext": true
 }
 ```
@@ -363,6 +380,21 @@ the field with the associated `@stream` directive. If an error has caused a
 `null` to bubble up to a field higher than the list field with the associated
 `@stream` directive, then the stream will complete with errors.
 
+#### Deferred data
+
+Deferred data is a map that may appear as an item in the `incremental` entry of
+a response. Deferred data is the result of an associated `@defer` directive in
+the operation. A defer payload must contain `data` and `path` entries and may
+contain `errors` and `extensions` entries.
+
+##### Data
+
+The `data` entry in a Defer payload will be of the type of a particular field in
The adjacent `path` field will contain the path segments of +the field this data is associated with. If an error has caused a `null` to +bubble up to a field higher than the field that contains the fragment with the +associated `@defer` directive, then the fragment will complete with errors. + #### Path A `path` field allows for the association to a particular field in a GraphQL @@ -382,16 +414,21 @@ integer indicates that this result is set at a range, where the beginning of the range is at the index of this integer, and the length of the range is the length of the data. +When the `path` field is present on a Defer payload, it indicates that the +`data` field represents the result of the fragment containing the corresponding +`@defer` directive. The path segments must point to the location of the result +of the field containing the associated `@defer` directive. + When the `path` field is present on an "Error result", it indicates the response field which experienced the error. #### Label -Stream may contain a string field `label`. This `label` is the same label passed -to the `@stream` directive associated with the response. This allows clients to -identify which `@stream` directive is associated with this value. `label` will -not be present if the corresponding `@stream` directive is not passed a `label` -argument. +Stream and Defer payloads may contain a string field `label`. This `label` is +the same label passed to the `@defer` or `@stream` directive associated with the +response. This allows clients to identify which `@defer` or `@stream` directive +is associated with this value. `label` will not be present if the corresponding +`@defer` or `@stream` directive is not passed a `label` argument. ## Serialization Format @@ -452,10 +489,10 @@ enables more efficient parsing of responses if the order of properties can be anticipated. 
Serialization formats which represent an ordered map should preserve the order -of requested fields as defined by {AnalyzeSelectionSet()} in the Execution -section. Serialization formats which only represent unordered maps but where -order is still implicit in the serialization's textual order (such as JSON) -should preserve the order of requested fields textually. +of requested fields as defined by {CollectFields()} in the Execution section. +Serialization formats which only represent unordered maps but where order is +still implicit in the serialization's textual order (such as JSON) should +preserve the order of requested fields textually. For example, if the request was `{ name, age }`, a GraphQL service responding in JSON should respond with `{ "name": "Mark", "age": 30 }` and should not respond From 0b25562816d0796dec654c58a6176638ad121c98 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 9 Jan 2024 10:02:19 +0200 Subject: [PATCH 08/46] Add GetPending algorithm This is a simplified version of GetIncrementalResult useful for the initial result. Readers approaching the spec in order may benefit from a bit of repetition. --- spec/Section 6 -- Execution.md | 23 ++++++++++++++++++----- 1 file changed, 18 insertions(+), 5 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index f91632886..55adee9b7 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -350,14 +350,27 @@ serial): {groupedFieldSet}. - Let {newPendingResults}, {futures}, and {deferStates} be the result of {ProcessIncrementalDigests(incrementalDigests)}. -- Let {ids} and {initialPayload} be the result of - {GetIncrementalPayload(newPendingResults)}. -- If {ids} is empty, return an empty unordered map consisting of {data} and +- Let {pending} and {ids} be the result of {GetPending(newPendingResults)}. +- If {pending} is empty, return an unordered map consisting of {data} and {errors}. 
-- Set the corresponding entries on {initialPayload} to {data} and {errors}. +- Let {hasNext} be {true}. +- Let {initialResult} be an unordered map consisting of {data}, {errors}, + {pending}, and {hasNext}. - Let {subsequentResults} be the result of {YieldSubsequentResults(ids, deferStates, futures)}. -- Return {initialPayload} and {subsequentResults}. +- Return {initialResult} and {subsequentResults}. + +GetPending(newPendingResults): + +- Initialize {pending} to an empty list. +- Initialize {ids} to a new unordered map of pending results to identifiers. +- For each {newPendingResult} in {newPendingResults}: + - Let {path} and {label} be the corresponding entries on {newPendingResult}. + - Let {id} be a unique identifier for this {newPendingResult}. + - Set the entry for {newPendingResult} in {ids} to {id}. + - Let {pendingEntry} be an unordered map containing {path}, {label}, and {id}. + - Append {pendingEntry} to {pending}. +- Return {pending} and {ids}. ### Field Collection From dc443e8a830836bbd1f65e7386e87168cd2b4c93 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 9 Jan 2024 10:30:11 +0200 Subject: [PATCH 09/46] fix nomenclature for GetParentAndParentDeferState pendingInfo => deferState as we now track ids separately from the defer state; the prior language was left over from an earlier branch --- spec/Section 6 -- Execution.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 55adee9b7..926091da0 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -687,13 +687,13 @@ ProcessIncrementalDigests(incrementalDigests, originalDeferStates): - Set the entry for {parent} in {deferStates} to {newDeferState}. - Return {pending}, {futures}, and {deferStates}. -GetParentAndPendingInfo(pendingInfo, pendingMap): +GetParentAndParentDeferState(deferState, deferStates): -- Let {ancestors} be the corresponding entry on {pendingInfo}. 
+- Let {ancestors} be the corresponding entry on {deferState}. - For each {ancestor} of {ancestors}: - - Let {ancestorPendingInfo} be the entry in {pendingMap} for {ancestor}. - - If {ancestorPendingInfo} is defined, return {ancestor} and - {ancestorPendingInfo}. + - Let {ancestorDeferState} be the entry in {deferStates} for {ancestor}. + - If {ancestorDeferState} is defined, return {ancestor} and + {ancestorDeferState}. - Return. ### Yielding Subsequent Results From 8cc25073d5e5076c71828d9c17fe3072119a9074 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 9 Jan 2024 10:44:51 +0200 Subject: [PATCH 10/46] doc: add more prose for ProcessIncrementalDigests to explain why and how it manages defer ordering --- spec/Section 6 -- Execution.md | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 926091da0..42815c95c 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -648,9 +648,12 @@ An Incremental Digest is a structure containing: contain additional Incremental Digests that will immediately or eventually complete those results. -Given the current state of any pending results, if any, the -{ProcessIncrementalDigests()} algorithm describes how incremental digests are -processed to update that state as incremental digests are encountered. +Incremental digests must be processed carefully because pending results must be +delivered to the client in the appropriate order. In particular, nested deferred +fragments may complete in any order, and the results of those fragments must be +delivered to the client in the order in which they were specified in the +operation. The {ProcessIncrementalDigests()} algorithm manages the tree that +maintains the correct delivery order. 
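As an editorial illustration of the ordering guarantee described in the paragraph above (not part of the specification text), the parent-before-child release constraint can be modeled in a few lines of Python. The names `DeferredFragment`, `release_in_order`, and `deliver` are hypothetical, chosen only for this sketch:

```python
# Illustrative model: nested deferred fragments may complete in any order,
# but each fragment's result is released only after its parent's result has
# been released, matching the order specified in the operation.

class DeferredFragment:
    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent
        self.completed = False
        self.released = False

def release_in_order(fragment, released):
    """Release a fragment once it is completed and its parent is released."""
    if fragment.completed and not fragment.released and (
        fragment.parent is None or fragment.parent.released
    ):
        fragment.released = True
        released.append(fragment.label)
        return True
    return False

def deliver(fragments, completion_order):
    """Record completions in arbitrary order; release results parent-first."""
    by_label = {f.label: f for f in fragments}
    released = []
    for label in completion_order:
        by_label[label].completed = True
        # Releasing one fragment may unblock completed descendants.
        progress = True
        while progress:
            progress = any(release_in_order(f, released) for f in fragments)
    return released

# A child completing before its parent is buffered until the parent completes.
root = DeferredFragment("outer")
child = DeferredFragment("inner", parent=root)
print(deliver([root, child], ["inner", "outer"]))  # ['outer', 'inner']
```

Even when the inner fragment's execution finishes first, the release order follows the nesting declared in the operation.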
ProcessIncrementalDigests(incrementalDigests, originalDeferStates): From 8c138b069dd80f39604debb99721422ffec8e1c0 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 9 Jan 2024 10:45:33 +0200 Subject: [PATCH 11/46] fix: add missing incremental digest processing for streams --- spec/Section 6 -- Execution.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 42815c95c..ea469f33e 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -688,6 +688,8 @@ ProcessIncrementalDigests(incrementalDigests, originalDeferStates): - Append {newPendingResult} to {children}. - Set the corresponding entry on {newParentDeferState} to {children}. - Set the entry for {parent} in {deferStates} to {newDeferState}. + - Otherwise: + - Append {newPendingResult} to {pending}. - Return {pending}, {futures}, and {deferStates}. GetParentAndParentDeferState(deferState, deferStates): From ea03b3cb57fd3bec01708819cb777a8355bbdc15 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 9 Jan 2024 10:46:36 +0200 Subject: [PATCH 12/46] nit: add caps for Deferred Fragment --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index ea469f33e..353c445ab 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -674,7 +674,7 @@ ProcessIncrementalDigests(incrementalDigests, originalDeferStates): - Set the entry for {deferredFragment} in {deferStates} to {deferState}. - Initialize {pending} to an empty list. - For each {newPendingResult} in {newPendingResults}: - - If {newPendingResult} is a deferred fragment: + - If {newPendingResult} is a Deferred Fragment: - Let {deferState} be the entry in {deferStates} for {newPendingResult}. - Let {parent} and {parentDeferState} be the result of {GetParentAndParentDeferState(deferState, deferStates)}. 
From 1be7a5864c52d95810b512dacf0a4b6119bdcb8a Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Tue, 9 Jan 2024 10:56:36 +0200
Subject: [PATCH 13/46] fix: remove unnecessary variable

when releasing pending futures, they can simply be appended to
remainingFutures, i.e. the uncompleted futures, for later processing after the
pending entry is issued.
---
 spec/Section 6 -- Execution.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 353c445ab..2cc6e8d2c 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -846,12 +846,12 @@ pending, incrementalDigests, remainingFutures, pendingFutures):
    {resultIncrementalDigests} be the result of calling
    {GetUpdatesForDeferredResult(deferStates, result)}.
  - Append all items in {resultPending} to {pending}.
-  - Initialize {releasedFutures} and {remainingPendingFutures} to empty lists.
+  - Initialize {remainingPendingFutures} to an empty list.
  - For each {future} in {pendingFutures}:
    - Let {deferredFragments} be the Deferred Fragments completed by {future}.
    - For each {deferredFragment} of {deferredFragments}:
      - If {deferredFragment} is in {resultPending}, append {future} to
-        {releasedFutures}.
+        {remainingFutures}.
      - Continue to the next {future} in {pendingFutures}.
    - Append {future} to {remainingPendingFutures}.
  - Append {update} to {updates}.
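The bookkeeping this commit describes can be sketched as follows; this is an editorial aside, and the names (`release_futures`, `result_pending`) are assumptions for illustration, not spec algorithm names. A parked future is "released" simply by moving it into the list of uncompleted futures once one of the deferred fragments it completes becomes pending:

```python
# Partition parked futures: those completing a now-pending fragment are
# appended to the uncompleted (remaining) futures for later processing;
# the rest stay parked.

def release_futures(pending_futures, result_pending, remaining_futures):
    still_pending = []
    for future, completes in pending_futures:
        if any(fragment in result_pending for fragment in completes):
            remaining_futures.append((future, completes))
        else:
            still_pending.append((future, completes))
    return still_pending, remaining_futures

remaining = []
parked = [("futureA", {"frag1"}), ("futureB", {"frag2"})]
parked, remaining = release_futures(parked, {"frag1"}, remaining)
print([f for f, _ in parked], [f for f, _ in remaining])
```

No separate `releasedFutures` collection is needed, which is exactly the simplification the commit message argues for.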
From d998aca64a0b172eaca44496eedfe413c598b125 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 9 Jan 2024 10:58:32 +0200 Subject: [PATCH 14/46] fix: change variable name to be consistent we should use newFutures to add a bit more specificity --- spec/Section 6 -- Execution.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 2cc6e8d2c..75fb0153d 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -861,16 +861,17 @@ pending, incrementalDigests, remainingFutures, pendingFutures): - Otherwise: - Append {resultIncrementalDigest} to {supplementalIncrementalDigests}. - If {supplementalIncrementalDigests} is empty: - - Let {newPendingResults}, {futures}, and {deferStates} be the result of + - Let {newPendingResults}, {newFutures}, and {deferStates} be the result of {ProcessIncrementalDigests(incrementalDigests, originalDeferStates)}. - Append all items in {newPendingResults} to {pending}. - Return {deferStates}, {updates}, {pending}, {newFutures}, {remainingFutures}, and {remainingPendingFutures}. -- Let {newPendingResults}, {futures}, and {deferStates} be the results of +- Let {newPendingResults}, {newFutures}, and {deferStates} be the results of {ProcessIncrementalDigests(supplementalIncrementalDigests, deferStates)}. - Append all items in {newPendingResults} to {pending}. -- Return the result of {ProcessCompletedFutures(futures, deferStates, updates, - pending, incrementalDigests, remainingFutures, remainingPendingFutures)}. +- Return the result of {ProcessCompletedFutures(newFutures, deferStates, + updates, pending, incrementalDigests, remainingFutures, + remainingPendingFutures)}. 
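The recursive shape of {ProcessCompletedFutures()} that this hunk renames variables within — completed futures yield results, and those results may carry further futures — can be sketched in miniature (an editorial illustration with assumed names, not the spec algorithm itself):

```python
# Drain completed futures; each returns (update, new_futures). Any futures
# produced by a result are processed by recursing until none remain.

def process_completed(futures, updates=None):
    if updates is None:
        updates = []
    new_futures = []
    for future in futures:
        update, spawned = future()  # "completing" the future
        updates.append(update)
        new_futures.extend(spawned)
    if not new_futures:
        return updates
    # Recurse on the futures produced by the results just processed.
    return process_completed(new_futures, updates)

inner = lambda: ("inner update", [])
outer = lambda: ("outer update", [inner])
print(process_completed([outer]))  # ['outer update', 'inner update']
```

Keeping the recursion's inputs named consistently (here, `new_futures`) is the point of the variable-name fix above.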
GetUpdatesForStreamItems(streamItems): From b97a920a11fc758d26e64dddea9d7577fc604321 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 9 Jan 2024 11:13:25 +0200 Subject: [PATCH 15/46] fix typos in GetUpdatesForDeferredResult within the section for making sure we don't send incremental entries twice 1. we need to iterate through the incremental entries that are being sent, not the completed entries 2. we need to update the new pending incremental entries correctly. --- spec/Section 6 -- Execution.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 75fb0153d..7b73e6a72 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -927,7 +927,7 @@ GetUpdatesForDeferredResult(originalDeferStates, deferredResult): - Remove the entry for {deferredFragment} on {deferStates}. - Append {deferredFragment} to {completed}. - Append all items in {pending} on {newDeferState} to {incremental}. -- For each {deferredResult} in {completed}: +- For each {deferredResult} in {incremental}: - Let {deferredFragments} be the corresponding entry on {deferredResult}. - For each {deferredFragment} in {deferredFragments}: - Let {deferState} be the entry on {deferStates} for {deferredFragment}. @@ -937,7 +937,8 @@ GetUpdatesForDeferredResult(originalDeferStates, deferredResult): - If {pending} contains {deferredResult}: - Let {newDeferState} be a new unordered map containing all entries on {deferState}. - - Set the entry for {deferredFragment} on {deferStates} to {deferState}. + - Set the entry for {deferredFragment} on {deferStates} to + {newDeferState}. - Let {pending} be a new set containing all of the members of {pending} on {newDeferState}. - Set the corresponding entry on {newDeferState} to {pending}. 
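The hunk above repeatedly builds "a new unordered map containing all entries" before writing to it. That copy-on-write discipline — copy the outer map and the nested entry being changed, never mutate a shared structure — can be illustrated with a short Python sketch (names are assumptions, not from the spec):

```python
# Copy-on-write update of a defer-states map: earlier snapshots of the map
# remain valid because neither the outer map nor its nested entries are
# mutated in place.

def with_updated_pending(defer_states, fragment, pending):
    new_states = dict(defer_states)                 # copy the outer map
    new_entry = dict(new_states.get(fragment, {}))  # copy the nested entry
    new_entry["pending"] = set(pending)             # copy the pending set
    new_states[fragment] = new_entry
    return new_states

original = {"fragA": {"pending": {"result1", "result2"}}}
updated = with_updated_pending(original, "fragA", {"result2"})
print(original["fragA"]["pending"], updated["fragA"]["pending"])
```

The original snapshot still lists both results while the updated snapshot lists one, which is what allows the algorithms to hand earlier states to concurrent consumers safely.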
From a6946661335b9e954e39cc352e84465218dce631 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Wed, 10 Jan 2024 14:50:37 +0200 Subject: [PATCH 16/46] only save defer parent rather than all ancestors --- spec/Section 6 -- Execution.md | 29 +++++++++++++---------------- 1 file changed, 13 insertions(+), 16 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 7b73e6a72..353133293 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -675,10 +675,10 @@ ProcessIncrementalDigests(incrementalDigests, originalDeferStates): - Initialize {pending} to an empty list. - For each {newPendingResult} in {newPendingResults}: - If {newPendingResult} is a Deferred Fragment: - - Let {deferState} be the entry in {deferStates} for {newPendingResult}. - - Let {parent} and {parentDeferState} be the result of - {GetParentAndParentDeferState(deferState, deferStates)}. - - If {parent} is not defined: + - Let {parent} be the result of {GetNonEmptyParent(newPendingResult, + deferStates)}. + - Let {parentDeferState} be the entry for {parent} on {deferStates}. + - If {parentDeferState} is not defined: - Append {newPendingResult} to {pending}. - Otherwise: - Let {newParentDeferState} be an unordered map containing all of the @@ -692,14 +692,14 @@ ProcessIncrementalDigests(incrementalDigests, originalDeferStates): - Append {newPendingResult} to {pending}. - Return {pending}, {futures}, and {deferStates}. -GetParentAndParentDeferState(deferState, deferStates): +GetNonEmptyParent(deferredFragment, deferStates): -- Let {ancestors} be the corresponding entry on {deferState}. -- For each {ancestor} of {ancestors}: - - Let {ancestorDeferState} be the entry in {deferStates} for {ancestor}. - - If {ancestorDeferState} is defined, return {ancestor} and - {ancestorDeferState}. -- Return. +- Let {parent} be the corresponding entry on {deferredFragment}. +- If {parent} is not defined, return. 
+- Let {parentDeferState} be the entry for {parent} on {deferStates}.
+- If {parentDeferState} is not defined, return the result of
+  {GetNonEmptyParent(parent, deferStates)}.
+- Return {parent}.

### Yielding Subsequent Results

@@ -986,11 +986,8 @@ GetNewDeferredFragments(newDeferUsages, path, deferMap):
- Let {newDeferMap} be a new unordered map of Defer Usage records to Deferred
  Fragment records containing all of the entries in {deferMap}.
- For each {deferUsage} in {newDeferUsages}:
-  - Initialize {ancestors} to an empty list.
-  - Let {deferUsageAncestors} be the result of {GetAncestors(deferUsage)}.
-  - For each {deferUsageAncestor} of {deferUsageAncestors}:
-    - Let {ancestor} be the entry in {deferMap} for {deferUsageAncestor}.
-    - Append {ancestor} to {ancestors}.
+  - Let {parentDeferUsage} be the corresponding entry on {deferUsage}.
+  - Let {parent} be the entry in {deferMap} for {parentDeferUsage}.
  - Let {label} be the corresponding entry on {deferUsage}.
-  - Let {newDeferredFragment} be an unordered map containing {ancestors},
-    {path} and {label}.
+  - Let {newDeferredFragment} be an unordered map containing {parent}, {path},
+    and {label}.

From 6941ddddfa1149478d64f729c0218f5e71624568 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Wed, 10 Jan 2024 16:02:54 +0200
Subject: [PATCH 17/46] remove Incremental Digests concept leaving just futures

---
 spec/Section 6 -- Execution.md | 258 ++++++++++++++++-----------------
 1 file changed, 123 insertions(+), 135 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 353133293..fcd0be531 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -343,13 +343,12 @@ serial):

- Let {groupedFieldSet} and {newDeferUsages} be the result of
  {CollectFields(objectType, selectionSet, variableValues)}.
- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}.
-- Let {data} and {incrementalDigests} be the result of
-  {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue,
-  variableValues, serial)}.
+- Let {data} and {futures} be the result of {ExecuteFieldPlan(newDeferUsages, + fieldPlan, objectType, initialValue, variableValues, serial)}. - Let {errors} be the list of all _field error_ raised while executing the {groupedFieldSet}. -- Let {newPendingResults}, {futures}, and {deferStates} be the result of - {ProcessIncrementalDigests(incrementalDigests)}. +- Let {pendingResults}, {futures}, and {deferStates} be the result of + {ProcessNewFutures(futures)}. - Let {pending} and {ids} be the result of {GetPending(newPendingResults)}. - If {pending} is empty, return an unordered map consisting of {data} and {errors}. @@ -637,59 +636,58 @@ IsSameSet(setA, setB): - Return {false}. - Return {true}. -### Processing Incremental Digests +### Processing New Futures -An Incremental Digest is a structure containing: +Futures must be processed carefully because pending results must be delivered to +the client in the appropriate order. In particular, nested deferred fragments +may complete in any order, but the results of those fragments must be delivered +to the client in the order in which they were specified in the operation. The +{ProcessNewFutures()} algorithm manages the tree that maintains the correct +delivery order. -- {newPendingResults}: a list of new pending results to publish. -- {futures}: a list of future executions whose results will complete pending - results. The results of these future execution may immediately complete the - pending results, or may incrementally complete the pending results, and - contain additional Incremental Digests that will immediately or eventually - complete those results. - -Incremental digests must be processed carefully because pending results must be -delivered to the client in the appropriate order. In particular, nested deferred -fragments may complete in any order, and the results of those fragments must be -delivered to the client in the order in which they were specified in the -operation. 
The {ProcessIncrementalDigests()} algorithm manages the tree that -maintains the correct delivery order. - -ProcessIncrementalDigests(incrementalDigests, originalDeferStates): +ProcessNewFutures(futures, originalDeferStates): - Let {deferStates} be a new unordered map containing all entries in {originalDeferStates}. -- Let {newPendingResults} and {futures} be lists containing all of the items - from the corresponding lists within each item of {incrementalDigests}. -- For each {future} in {futures}: - - Let {deferredFragments} be the list of deferred fragments completed by - {future}. - - For each {deferredFragment} of {deferredFragments}: - - Let {deferState} be the entry in {deferStates} for {deferredFragment}. - - Let {newDeferState} be a new unordered map containing all of the entries - in {deferState}. - - Let {count} be the corresponding entry on {newDeferState} for - {deferredFragment}. - - Let {newCount} be {count} + 1 if {count} is defined, otherwise {0}. - - Set the entry for {deferredFragment} in {deferStates} to {deferState}. - Initialize {pending} to an empty list. -- For each {newPendingResult} in {newPendingResults}: - - If {newPendingResult} is a Deferred Fragment: - - Let {parent} be the result of {GetNonEmptyParent(newPendingResult, - deferStates)}. - - Let {parentDeferState} be the entry for {parent} on {deferStates}. - - If {parentDeferState} is not defined: - - Append {newPendingResult} to {pending}. - - Otherwise: - - Let {newParentDeferState} be an unordered map containing all of the - entries on {parentDeferState}. - - Let {children} be a new list containing all of the entries on {children} - on {newParentDeferState}. - - Append {newPendingResult} to {children}. - - Set the corresponding entry on {newParentDeferState} to {children}. - - Set the entry for {parent} in {deferStates} to {newDeferState}. +- Initialize {collectedDeferredFragments} to the empty set. 
- For each {future} in {futures}:
  - If {future} will incrementally complete a Stream:
    - Let {stream} be that Stream.
    - Append {stream} to {pending}.
  - Otherwise:
    - Let {deferredFragments} be the list of deferred fragments completed by
      {future}.
    - For each {deferredFragment} of {deferredFragments}:
      - Add {deferredFragment} to {collectedDeferredFragments}.
      - Let {deferState} be the entry in {deferStates} for {deferredFragment}.
      - If {deferState} is not defined:
        - Let {count} be {0}.
        - Let {deferState} be a new unordered map containing {count}.
        - Set the entry for {deferredFragment} in {deferStates} to {deferState}.
      - Otherwise:
        - Let {newDeferState} be a new unordered map containing all of the
          entries in {deferState}.
        - Let {count} be the corresponding entry on {newDeferState} for
          {deferredFragment}.
        - Let {newCount} be {count} + 1.
        - Set the entry for {count} on {newDeferState} to {newCount}.
        - Set the entry for {deferredFragment} in {deferStates} to
          {newDeferState}.
- For each {deferredFragment} in {collectedDeferredFragments}:
  - Let {parent} be the result of {GetNonEmptyParent(deferredFragment,
    deferStates)}.
  - Let {parentDeferState} be the entry for {parent} on {deferStates}.
  - If {parentDeferState} is not defined:
    - Append {deferredFragment} to {pending}.
  - Otherwise:
    - Let {newParentDeferState} be an unordered map containing all of the
      entries on {parentDeferState}.
    - Let {children} be a new list containing all of the entries on {children}
      on {newParentDeferState}.
    - Append {deferredFragment} to {children}.
    - Set the corresponding entry on {newParentDeferState} to {children}.
    - Set the entry for {parent} in {deferStates} to {newParentDeferState}.
- Return {pending}, {futures}, and {deferStates}.

GetNonEmptyParent(deferredFragment, deferStates):

@@ -722,26 +720,28 @@ initiatedFutures, pendingFutures):

- Append {future} to {pendingFutures}.
- Wait for any future execution contained in {maybeCompletedFutures} to
  complete.
-- Let {deferStates}, {updates}, {newPendingResults}, {newestFutures},
+- Let {deferStates}, {pendingResults}, {updates}, {newestFutures},
   {remainingFutures}, and {pendingFutures} be the result of
   {ProcessCompletedFutures(futures, originalDeferStates)}.
 - Let {ids} and {payload} be the result of
-  {GetIncrementalPayload(newPendingResults, originalIds, updates)}.
+  {GetIncrementalPayload(pendingResults, originalIds, updates)}.
- If {hasNext} is not the only entry on {payload}, yield {payload}.
- If {hasNext} on {payload} is {false}:
  - Complete this subsequent result stream and return.
- Yield the results of {YieldSubsequentResults(ids, deferStates, newestFutures,
  remainingFutures, pendingFutures)}.

-GetIncrementalPayload(newPendingResults, originalIds, updates):
+GetIncrementalPayload(pendingResults, originalIds, updates):

- Let {ids} be a new unordered map containing all of the entries in
  {originalIds}.
- Initialize {pending}, {incremental}, and {completed} to empty lists.
-- For each {newPendingResult} in {newPendingResults}:
+- For each {pendingResult} in {pendingResults}:
+  - If an entry for {pendingResult} exists in {ids}, continue to the next
+    {pendingResult} in {pendingResults}.
-  - Let {path} and {label} be the corresponding entries on {newPendingResult}.
-  - Let {id} be a unique identifier for this {newPendingResult}.
-  - Set the entry for {newPendingResult} in {ids} to {id}.
+  - Let {path} and {label} be the corresponding entries on {pendingResult}.
+  - Let {id} be a unique identifier for this {pendingResult}.
+  - Set the entry for {pendingResult} in {ids} to {id}.
  - Let {pendingEntry} be an unordered map containing {path}, {label}, and {id}.
  - Append {pendingEntry} to {pending}.
- For each {update} of {updates}:
@@ -818,33 +818,30 @@ possibly:

- Completing existing pending results.
- Contributing data for the next payload.
-- Containing additional Incremental Digests.
+- Containing additional futures.
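The id bookkeeping in {GetIncrementalPayload()} above — assign each pending result a unique identifier exactly once, and skip results already announced on a prior payload — can be sketched as follows (an editorial illustration; the function name and payload fields here are simplified assumptions):

```python
# Each pending result gets a fresh id the first time it is seen; results
# already present in the id map are not re-announced to the client.

import itertools

_counter = itertools.count()

def get_incremental_payload(pending_results, ids):
    new_ids = dict(ids)
    pending = []
    for result in pending_results:
        if result in new_ids:
            continue  # already announced on a prior payload
        result_id = str(next(_counter))
        new_ids[result] = result_id
        pending.append({"path": [], "label": result, "id": result_id})
    return new_ids, pending

ids, pending = get_incremental_payload(["a", "b"], {})
ids, pending2 = get_incremental_payload(["a", "c"], ids)
print([p["label"] for p in pending], [p["label"] for p in pending2])
```

On the second call only the genuinely new result is announced, while the id for the already-known result is preserved for later `completed` entries to reference.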
-When encountering additional Incremental Digests, {ProcessCompletedFutures()} -calls itself recursively, processing the new Incremental Digests and checking -for any completed futures, as long as the new Incremental Digests do not contain -any new pending results. If they do, first a new payload is yielded, notifying -the client that new pending results have been encountered. +When encountering completed futures, {ProcessCompletedFutures()} calls itself +recursively on any futures for existing Deferred Fragments. -ProcessCompletedFutures(maybeCompletedFutures, originalDeferStates, updates, -pending, incrementalDigests, remainingFutures, pendingFutures): +ProcessCompletedFutures(maybeCompletedFutures, originalDeferStates, pending, +updates, futures, remainingFutures, pendingFutures): -- If {updates}, {pending}, {incrementalDigests}, {remainingFutures}, or - {pendingFutures} are not provided, initialize them to empty lists. +- If {pending} is not provided, initialize it to the empty set. +- If {updates}, {futures}, {remainingFutures}, or {pendingFutures} are not + provided, initialize them to empty lists. - Let {completedFutures} be a list containing all completed futures from {maybeCompletedFutures}; append the remaining futures to {remainingFutures}. - Let {deferStates} be {originalDeferStates}. -- Initialize {supplementalIncrementalDigests} to an empty list. +- Initialize {supplementalFutures} to an empty list. - For each {completedFuture} in {completedFutures}: - Let {result} be the result of {completedFuture}. - If {result} represents completion of Stream Items: - - Let {update} and {resultIncrementalDigests} be the result of calling + - Let {update} and {resultFutures} be the result of calling {GetUpdatesForStreamItems(result)}. - Let {remainingPendingFutures} be {pendingFutures}. - Otherwise: - - Let {deferStates}, {update}, {resultPending}, and - {resultIncrementalDigests} be the result of calling - {GetUpdatesForDeferredResult(deferStates, result)}. 
+ - Let {deferStates}, {update}, {resultPending}, and {resultFutures} be the + result of calling {GetUpdatesForDeferredResult(deferStates, result)}. - Append all items in {resultPending} to {pending}. - Initialize {remainingPendingFutures} to empty lists. - For each {future} in {pendingFutures}: @@ -855,23 +852,27 @@ pending, incrementalDigests, remainingFutures, pendingFutures): - Continue to the next {future} in {pendingFutures}. - Append {future} to {remainingPendingFutures}. - Append {update} to {updates}. - - For each {resultIncrementalDigest} in {resultIncrementalDigests}: - - If {resultIncrementalDigest} contains a {newPendingResults} entry: - - Append {resultIncrementalDigest} to {incrementalDigests}. + - For each {resultFuture} in {resultFutures}: + - Let {deferredFragments} be the Deferred Fragments completed by + {resultFuture}. + - For each {deferredFragment} of {deferredFragments}: + - Let {deferState} be the entry on {deferStates} for {deferredFragment}. + - If {deferState} is defined: + - Append {resultFuture} to {supplementalFutures}. + - Continue to the next {resultFuture} in {resultFutures}. - Otherwise: - - Append {resultIncrementalDigest} to {supplementalIncrementalDigests}. -- If {supplementalIncrementalDigests} is empty: - - Let {newPendingResults}, {newFutures}, and {deferStates} be the result of - {ProcessIncrementalDigests(incrementalDigests, originalDeferStates)}. - - Append all items in {newPendingResults} to {pending}. - - Return {deferStates}, {updates}, {pending}, {newFutures}, + - Append {resultFuture} to {futures}. +- If {supplementalFutures} is empty: + - Let {pendingResults}, {newFutures}, and {deferStates} be the result of + {ProcessNewFutures(futures, originalDeferStates)}. + - Add all items in {pendingResults} to {pending}. + - Return {deferStates}, {pending}, {updates}, {newFutures}, {remainingFutures}, and {remainingPendingFutures}. 
-- Let {newPendingResults}, {newFutures}, and {deferStates} be the results of - {ProcessIncrementalDigests(supplementalIncrementalDigests, deferStates)}. -- Append all items in {newPendingResults} to {pending}. +- Let {pendingResults}, {newFutures}, and {deferStates} be the results of + {ProcessNewFutures(supplementalFutures, deferStates)}. +- Add all items in {pendingResults} to {pending}. - Return the result of {ProcessCompletedFutures(newFutures, deferStates, - updates, pending, incrementalDigests, remainingFutures, - remainingPendingFutures)}. + pending, updates, futures, remainingFutures, remainingPendingFutures)}. GetUpdatesForStreamItems(streamItems): @@ -887,14 +888,14 @@ GetUpdatesForStreamItems(streamItems): - Otherwise: - Let {incremental} be a list containing {streamItems}. - Let {update} be an unordered map containing {incremental}. - - Let {incrementalDigests} be the corresponding entry on {streamItems}. -- Return {update} and {incrementalDigests}. + - Let {futures} be the corresponding entry on {streamItems}. +- Return {update} and {futures}. GetUpdatesForDeferredResult(originalDeferStates, deferredResult): - Let {deferStates} be a new unordered map containing all of the entries in {originalDeferStates}. -- Initialize {incrementalDigests} to an empty list. +- Initialize {futures} to an empty list. - Let {deferredFragments}, {data}, and {errors} be the corresponding entries on {deferredResult}. - Initialize {completed} to an empty list. @@ -906,7 +907,7 @@ GetUpdatesForDeferredResult(originalDeferStates, deferredResult): - Remove the entry for {deferredFragment} on {completed}. - Append {deferredFragment} to {completed}. - Let {update} be an unordered map containing {completed} and {errors}. - - Return {update} and {incrementalDigests}. + - Return {update} and {futures}. - Initialize {incremental} to an empty list. - Initialize {newPending} to the empty set. 
- For each {deferredFragment} of {deferredFragments}: @@ -944,7 +945,7 @@ GetUpdatesForDeferredResult(originalDeferStates, deferredResult): - Set the corresponding entry on {newDeferState} to {pending}. - Remove {deferredResult} from {pending}. - Let {update} be an unordered map containing {incremental} and {completed}. -- Return {deferStates}, {update}, {newPending}, and {incrementalDigests}. +- Return {deferStates}, {update}, {newPending}, and {futures}. ## Executing a Field Plan @@ -959,30 +960,27 @@ variableValues, serial, path, deferUsageSet, deferMap): - Let {groupedFieldSet}, {newGroupedFieldSets}, {newDeferUsages}, and {newGroupedFieldSetsRequiringDeferral} be the corresponding entries on {fieldPlan}. -- Let {newDeferMap} and {newPendingResults} be the result of - {GetNewDeferredFragments(newDeferUsages, path, deferMap)}. +- Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path, + deferMap)}. - Allowing for parallelization, perform the following steps: - - Let {data} and {nestedIncrementalDigests} be the result of running + - Let {data} and {nestedFutures} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is {true}, _normally_ (allowing parallelization) otherwise. - - Let {incrementalDigest} be the result of - {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, - newGroupedFieldSets, false, path, newDeferMap)}. - - Let {deferredIncrementalDigest} be the result of + - Let {futures} be the result of {ExecuteDeferredGroupedFieldSets(objectType, + objectValue, variableValues, newGroupedFieldSets, false, path, + newDeferMap)}. + - Let {deferredFutures} be the result of {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, newGroupedFieldSetsRequiringDeferral, true, path, newDeferMap)}. -- Set the corresponding entry on {deferredIncrementalDigest} to - {newPendingResults}. 
-- Let {incrementalDigests} be a list containing {incrementalDigest}, - {deferredIncrementalDigest}, and all items in {nestedIncrementalDigests}. -- Return {data} and {incrementalDigests}. +- Let {futures} be a list containing {future}, {deferredFutures}, and all items + in {nestedFutures}. +- Return {data} and {futures}. -GetNewDeferredFragments(newDeferUsages, path, deferMap): +GetNewDeferMap(newDeferUsages, path, deferMap): -- Initialize {newDeferredFragments} to an empty list. - If {newDeferUsages} is empty: - - Return {deferMap} and {newDeferredFragments}. + - Return {deferMap}. - Let {newDeferMap} be a new unordered map of Defer Usage records to Deferred Fragment records containing all of the entries in {deferMap}. - For each {deferUsage} in {newDeferUsages}: @@ -992,8 +990,7 @@ GetNewDeferredFragments(newDeferUsages, path, deferMap): - Let {newDeferredFragment} be an unordered map containing {ancestors}, {path} and {label}. - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. - - Append {newDeferredFragment} to {newDeferredFragments}. -- Return {newDeferMap} and {newDeferredFragments}. +- Return {newDeferMap}. ## Executing a Grouped Field Set @@ -1008,20 +1005,19 @@ ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap): - Initialize {resultMap} to an empty ordered map. -- Initialize {incrementalDigests} to an empty list. +- Initialize {futures} to an empty list. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value is unaffected if an alias is used. - Let {fieldType} be the return type defined for the field {fieldName} of {objectType}. - If {fieldType} is defined: - - Let {responseValue} and {fieldIncrementalDigests} be the result of + - Let {responseValue} and {fieldFutures} be the result of {ExecuteField(objectType, objectValue, fieldType, fields, variableValues, path)}. 
- Set {responseValue} as the value for {responseKey} in {resultMap}. - - Append all Incremental Digests in {fieldIncrementalDigests} to - {incrementalDigests}. -- Return {resultMap} and {incrementalDigests}. + - Append all futures in {fieldFutures} to {futures}. +- Return {resultMap} and {futures}. Note: {resultMap} is ordered by which fields appear first in the operation. This is explained in greater detail in the Field Collection section above. @@ -1190,20 +1186,19 @@ newGroupedFieldSets, shouldInitiateDefer, path, deferMap): - Following any implementation specific deferral of further execution, initiate {future}. - Append {future} to {futures}. -- Let {incrementalDigest} be a new Incremental Digest created from {futures}. -- Return {incrementalDigest}. +- Return {futures}. ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap): -- Let {data} and {incrementalDigests} be the result of running +- Let {data} and {futures} be the result of running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing parallelization). - Let {errors} be the list of all _field error_ raised while executing the {groupedFieldSet}. - Let {deferredResult} be an unordered map containing {path}, - {deferredFragments}, {data}, {errors}, and {incrementalDigests}. + {deferredFragments}, {data}, {errors}, and {futures}. - Return {deferredResult}. ## Executing Fields @@ -1331,14 +1326,14 @@ deferUsageSet, deferMap): - If the {fieldType} is a Non-Null type: - Let {innerType} be the inner type of {fieldType}. - - Let {completedResult} and {incrementalDigests} be the result of calling + - Let {completedResult} and {futures} be the result of calling {CompleteValue(innerType, fields, result, variableValues, path)}. - If {completedResult} is {null}, raise a _field error_. - - Return {completedResult} and {incrementalDigests}. 
+ - Return {completedResult} and {futures}. - If {result} is {null} (or another internal value similar to {null} such as {undefined}), return {null}. - If {fieldType} is a List type: - - Initialize {incrementalDigests} to an empty list. + - Initialize {futures} to an empty list. - If {result} is not a collection of values, raise a _field error_. - Let {field} be the first entry in {fields}. - Let {innerType} be the inner type of {fieldType}. @@ -1368,22 +1363,17 @@ deferUsageSet, deferMap): - If early execution of streamed fields is desired: - Following any implementation specific deferral of further execution, initiate {future}. - - Let {incrementalDigest} be a new Incremental Digest created from - {stream} and {future}. - - Append {incrementalDigest} to {incrementalDigests}. - - Return {items} and {incrementalDigests}. + - Append {future} to {futures}. - Otherwise: - Wait for the next item from {result} via the {iterator}. - If an item is not retrieved because of an error, raise a _field error_. - Let {item} be the item retrieved from {result}. - Let {itemPath} be {path} with {index} appended. - - Let {completedItem} and {itemIncrementalDigests} be the result of - calling {CompleteValue(innerType, fields, item, variableValues, - itemPath)}. + - Let {completedItem} and {itemFutures} be the result of calling + {CompleteValue(innerType, fields, item, variableValues, itemPath)}. - Append {completedItem} to {items}. - - Append all Incremental Digests in {itemIncrementalDigests} to - {incrementalDigests}. - - Return {items} and {incrementalDigests}. + - Append all futures in {itemFutures} to {futures}. + - Return {items} and {futures}. - If {fieldType} is a Scalar or Enum type: - Return the result of {CoerceResult(fieldType, result)}. - If {fieldType} is an Object, Interface, or Union type: @@ -1418,7 +1408,7 @@ variableValues): - If {iterator} is closed, return. - Let {item} be the next item retrieved via {iterator}. - Let {nextIndex} be {index} plus one. 
-- Let {completedItem} and {itemIncrementalDigests} be the result of
+- Let {completedItem} and {itemFutures} be the result of
   {CompleteValue(innerType, fields, item, variableValues, itemPath)}.
- Initialize {items} to an empty list.
- Append {completedItem} to {items}.
@@ -1429,12 +1419,10 @@ variableValues):
- If early execution of streamed fields is desired:
  - Following any implementation specific deferral of further execution,
    initiate {future}.
-- Let {incrementalDigest} be a new Incremental Digest created from {future}.
-- Initialize {incrementalDigests} to a list containing {incrementalDigest}.
-- Append all Incremental Digests in {itemIncrementalDigests} to
-  {incrementalDigests}.
+- Initialize {futures} to a list containing {future}.
+- Append all futures in {itemFutures} to {futures}.
- Let {streamedItems} be an unordered map containing {stream}, {items}, {errors},
-  and {incrementalDigests}.
+  and {futures}.
- Return {streamedItems}.

**Coercing Results**

From aa6e3dfcd49a7c73ff219ca3c576534def4a9e40 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Wed, 10 Jan 2024 16:30:18 +0200
Subject: [PATCH 18/46] if nested defers are completed, keep processing them

---
 spec/Section 6 -- Execution.md | 53 +++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 24 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index fcd0be531..16924586e 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -347,7 +347,7 @@ serial):
   fieldPlan, objectType, initialValue, variableValues, serial)}.
- Let {errors} be the list of all _field error_ raised while executing the
  {groupedFieldSet}.
-- Let {pendingResults}, {futures}, and {deferStates} be the result of
+- Let {pendingResults} and {deferStates} be the result of
  {ProcessNewFutures(futures)}.
- Let {pending} and {ids} be the result of {GetPending(newPendingResults)}.
- If {pending} is empty, return an unordered map consisting of {data} and @@ -688,7 +688,7 @@ ProcessNewFutures(futures, originalDeferStates): - Append {newPendingResult} to {children}. - Set the corresponding entry on {newParentDeferState} to {children}. - Set the entry for {parent} in {deferStates} to {newDeferState}. -- Return {pending}, {futures}, and {deferStates}. +- Return {pending} and {deferStates}. GetNonEmptyParent(deferredFragment, deferStates): @@ -720,15 +720,18 @@ initiatedFutures, pendingFutures): - Append {future} to {pendingFutures}. - Wait for any future execution contained in {maybeCompletedFutures} to complete. -- Let {deferStates}, {pendingResults}, {updates}, {newestFutures}, - {remainingFutures}, and {pendingFutures} be the result of - {ProcessCompletedFutures(futures, originalDeferStates)}. +- Let {completedFutures} be a list containing all completed futures from + {maybeCompletedFutures}; let the remaining futures be {remainingFutures}. +- Let {deferStates}, {pendingResults}, {updates}, {newFutures}, + {supplementalFutures} and {pendingFutures} be the result of + {ProcessCompletedFutures(completedFutures, originalDeferStates)}. +- Append all futures in {supplementalFutures} to {remainingFutures}. - Let {ids} and {payload} be the result of {GetIncrementalPayload(pendingResults, originalIds, updates)}. - If {hasNext} is not the only entry on {payload}, yield {payload}. - If {hasNext} on {payload} is {false}: - Complete this subsequent result stream and return. -- Yield the results of {YieldSubsequentResults(ids, deferStates, newestFutures, +- Yield the results of {YieldSubsequentResults(ids, deferStates, newFutures, remainingFutures, pendingFutures)}. GetIncrementalPayload(pendingResults, originalIds, updates): @@ -821,18 +824,17 @@ possibly: - Containing additional futures. When encountering completed futures, {ProcessCompletedFutures()} calls itself -recursively on any futures for existing Deferred Fragments. 
+recursively on any new futures in case they have been completed.

-ProcessCompletedFutures(maybeCompletedFutures, originalDeferStates, pending,
-updates, futures, remainingFutures, pendingFutures):
+ProcessCompletedFutures(completedFutures, originalDeferStates, pending, updates,
+newFutures, supplementalFutures, pendingFutures):

- If {pending} is not provided, initialize it to the empty set.
-- If {updates}, {futures}, {remainingFutures}, or {pendingFutures} are not
+- If {updates}, {newFutures}, {supplementalFutures}, or {pendingFutures} are not
  provided, initialize them to empty lists.
-- Let {completedFutures} be a list containing all completed futures from
-  {maybeCompletedFutures}; append the remaining futures to {remainingFutures}.
- Let {deferStates} be {originalDeferStates}.
-- Initialize {supplementalFutures} to an empty list.
+- Initialize {maybeCompletedNewFutures} and {maybeCompletedSupplementalFutures}
+  to empty lists.
- For each {completedFuture} in {completedFutures}:
  - Let {result} be the result of {completedFuture}.
  - If {result} represents completion of Stream Items:
    - Let {update} and {resultFutures} be the result of calling
      {GetUpdatesForStreamItems(result)}.
    - Let {remainingPendingFutures} be {pendingFutures}.
  - Otherwise:
    - Let {deferStates}, {update}, {resultPending}, and {resultFutures} be the
      result of calling {GetUpdatesForDeferredResult(deferStates, result)}.
    - Append all items in {resultPending} to {pending}.
    - Initialize {remainingPendingFutures} to an empty list.
    - For each {future} in {pendingFutures}:
      - Let {deferredFragments} be the Deferred Fragments completed by {future}.
      - For each {deferredFragment} of {deferredFragments}:
        - If {deferredFragment} is in {resultPending}, append {future} to
          {maybeCompletedNewFutures}.
        - Continue to the next {future} in {pendingFutures}.
      - Append {future} to {remainingPendingFutures}.
  - Append {update} to {updates}.
@@ -858,21 +860,24 @@ updates, futures, remainingFutures, pendingFutures):
  - For each {deferredFragment} of {deferredFragments}:
    - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
    - If {deferState} is defined:
-      - Append {resultFuture} to {supplementalFutures}.
+      - Append {resultFuture} to {maybeCompletedSupplementalFutures}.
      - Continue to the next {resultFuture} in {resultFutures}.
    - Otherwise:
-      - Append {resultFuture} to {futures}.
+      - Append {resultFuture} to {maybeCompletedNewFutures}.
+- Let {completedFutures} be a list containing all completed futures from
+  {maybeCompletedNewFutures} and {maybeCompletedSupplementalFutures}; append the
+  remaining futures to {newFutures} and {supplementalFutures}, respectively.
-- If {supplementalFutures} is empty:
-  - Let {pendingResults}, {newFutures}, and {deferStates} be the result of
-    {ProcessNewFutures(futures, originalDeferStates)}.
+- If {completedFutures} is empty:
+  - Let {pendingResults} and {deferStates} be the result of
+    {ProcessNewFutures(newFutures, originalDeferStates)}.
  - Add all items in {pendingResults} to {pending}.
-  - Return {deferStates}, {pending}, {updates}, {newFutures},
-    {remainingFutures}, and {remainingPendingFutures}.
+  - Return {deferStates}, {pending}, {updates}, {newFutures},
+    {supplementalFutures}, and {remainingPendingFutures}.
-- Let {pendingResults}, {newFutures}, and {deferStates} be the results of
+- Let {pendingResults} and {deferStates} be the results of
  {ProcessNewFutures(supplementalFutures, deferStates)}.
- Add all items in {pendingResults} to {pending}.
-- Return the result of {ProcessCompletedFutures(newFutures, deferStates,
-  pending, updates, futures, remainingFutures, remainingPendingFutures)}.
+- Return the result of {ProcessCompletedFutures(newFutures, deferStates,
+  pending, updates, newFutures, supplementalFutures, remainingPendingFutures)}.
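Not part of the patch itself, but the recursive drain performed by {ProcessCompletedFutures()} above may be easier to follow as executable code. The following TypeScript sketch is illustrative only: the `Future` shape, the `Result` type, and every function name here are hypothetical stand-ins for the spec's futures and updates, not the specification's actual data model.

```typescript
// Illustrative sketch of the ProcessCompletedFutures() recursion: process every
// already-completed future, then recurse on any futures spawned by those
// results, stopping once no completed futures remain.

type Future<T> = { promise: Promise<T>; settled: boolean; value?: T };

function makeFuture<T>(promise: Promise<T>): Future<T> {
  const future: Future<T> = { promise, settled: false };
  // Record settlement so a synchronous pass can partition completed futures.
  promise.then((value) => {
    future.settled = true;
    future.value = value;
  });
  return future;
}

// A completed result may itself carry new futures (e.g. nested defers).
interface Result {
  data: string;
  newFutures?: Future<Result>[];
}

function processCompletedFutures(
  futures: Future<Result>[],
  processed: string[] = []
): { processed: string[]; remaining: Future<Result>[] } {
  const completed = futures.filter((f) => f.settled);
  const remaining = futures.filter((f) => !f.settled);
  if (completed.length === 0) {
    // Nothing more to drain; unsettled futures are raced again later.
    return { processed, remaining };
  }
  const spawned: Future<Result>[] = [];
  for (const future of completed) {
    const result = future.value as Result;
    processed.push(result.data);
    spawned.push(...(result.newFutures ?? []));
  }
  // Newly spawned futures may already have completed; keep processing them.
  const next = processCompletedFutures(spawned, processed);
  return {
    processed: next.processed,
    remaining: [...remaining, ...next.remaining],
  };
}
```

Under this sketch, a parent future whose result spawns an already-settled child is drained in the same pass, mirroring the patch's intent that nested defers which have already completed keep being processed rather than waiting for another race.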
GetUpdatesForStreamItems(streamItems): From 21ef532a788fcf9c08cfa41a249bbf74c1052a2c Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 15 Jan 2024 12:46:07 +0200 Subject: [PATCH 19/46] fix how new defer usages are collected previously, we mutated a list (along with some other bugs) --- spec/Section 6 -- Execution.md | 28 ++++++++++++++++------------ 1 file changed, 16 insertions(+), 12 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 16924586e..44ad7e73f 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -419,11 +419,11 @@ is maintained through execution, ensuring that fields appear in the executed response in a stable and predictable order. CollectFields(objectType, selectionSet, variableValues, deferUsage, -newDeferUsages, visitedFragments): +visitedFragments): - If {visitedFragments} is not provided, initialize it to the empty set. -- If {newDeferUsages} is not provided, initialize it to the empty set. - Initialize {groupedFields} to an empty ordered map of lists. +- Initialize {newDeferUsages} to an empty list. - For each {selection} in {selectionSet}: - If {selection} provides the directive `@skip`, let {skipDirective} be that directive. @@ -468,18 +468,19 @@ newDeferUsages, visitedFragments): argument. - Let {fragmentDeferUsage} be a new Defer Usage record created from {label} and {deferUsage}. - - Add {fragmentDeferUsage} to {newDeferUsages}. + - Append {fragmentDeferUsage} to {newDeferUsages}. - Otherwise: - Let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - fragmentDeferUsage, newDeferUsages, visitedFragments)}. + - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result + of calling {CollectFields(objectType, fragmentSelectionSet, + variableValues, fragmentDeferUsage, visitedFragments)}. 
- For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. - Let {groupForResponseKey} be the list in {groupedFields} for {responseKey}; if no such list exists, create it as an empty list. - Append all items in {fragmentGroup} to {groupForResponseKey}. + - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. - If {selection} is an {InlineFragment}: - Let {fragmentType} be the type condition on {selection}. - If {fragmentType} is not {null} and {DoesFragmentTypeApply(objectType, @@ -500,15 +501,16 @@ newDeferUsages, visitedFragments): - Add {fragmentDeferUsage} to {newDeferUsages}. - Otherwise: - Let {fragmentDeferUsage} be {deferUsage}. - - Let {fragmentGroupedFieldSet} be the result of calling - {CollectFields(objectType, fragmentSelectionSet, variableValues, - fragmentDeferUsage, newDeferUsages, visitedFragments)}. + - Let {fragmentGroupedFieldSet} and {fragmentNewDeferUsages} be the result + of calling {CollectFields(objectType, fragmentSelectionSet, + variableValues, fragmentDeferUsage, visitedFragments)}. - For each {fragmentGroup} in {fragmentGroupedFieldSet}: - Let {responseKey} be the response key shared by all fields in {fragmentGroup}. - Let {groupForResponseKey} be the list in {groupedFields} for {responseKey}; if no such list exists, create it as an empty list. - Append all items in {fragmentGroup} to {groupForResponseKey}. + - Append all items in {fragmentNewDeferUsages} to {newDeferUsages}. - Return {groupedFields} and {newDeferUsages}. DoesFragmentTypeApply(objectType, fragmentType): @@ -1496,17 +1498,19 @@ After resolving the value for `me`, the selection sets are merged together so CollectSubfields(objectType, fieldDetailsList, variableValues): -- Let {groupedFieldSet} be an empty map. +- Initialize {groupedFieldSet} to an empty ordered map of lists. +- Initialize {newDeferUsages} to an empty list. 
- For each {fieldDetails} in {fieldDetailsList}:
  - Let {field} and {deferUsage} be the corresponding entries on {fieldDetails}.
  - Let {fieldSelectionSet} be the selection set of {field}.
  - If {fieldSelectionSet} is null or empty, continue to the next field.
-  - Let {subGroupedFieldSet} be the result of {CollectFields(objectType,
-    fieldSelectionSet, variableValues, deferUsage)}.
+  - Let {subGroupedFieldSet} and {subNewDeferUsages} be the result of
+    {CollectFields(objectType, fieldSelectionSet, variableValues, deferUsage)}.
  - For each {subGroupedFieldSet} as {responseKey} and {subfields}:
    - Let {groupForResponseKey} be the list in {groupedFieldSet} for
      {responseKey}; if no such list exists, create it as an empty list.
    - Append all fields in {subfields} to {groupForResponseKey}.
+  - Append all defer usages in {subNewDeferUsages} to {newDeferUsages}.
- Return {groupedFieldSet} and {newDeferUsages}.

### Handling Field Errors

From 74f2f579b27733500920538eb392a7ce1c61220f Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Thu, 18 Jan 2024 14:08:38 +0200
Subject: [PATCH 20/46] rewrite

---
 spec/Section 6 -- Execution.md | 747 ++++++++++++++++++---------------
 1 file changed, 404 insertions(+), 343 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 44ad7e73f..924703638 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -329,47 +329,39 @@ mutations (serial), and subscriptions (where it is executed for each event in
 the underlying Source Stream).

 First, the selection set is turned into a field plan; then, we execute this
+field plan, which may yield one or more incremental results, as specified by the +{YieldIncrementalResults()} algorithm. If an operation contains `@defer` or +`@stream` directives, we return the Subsequent Result stream in addition to the +initial response. Otherwise, we return just the initial result. ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial): +- Let {future} be the future result of {ExecuteInitialResult(variableValues, + initialValue, objectType, selectionSet, serial)}. +- Let {futures} be a list containing {future}. +- Let {incrementalResults} be the result of {YieldIncrementalResults(futures)}. +- Wait for the first result in {incrementalResults} to be available. +- Let {initialResult} be that result. +- If {hasNext} on {initialResult} is not {true}: + - Return {initialResult}. +- Return {initialResult} and {incrementalResults}. + +ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet, +serial): + - If {serial} is not provided, initialize it to {false}. - Let {groupedFieldSet} and {newDeferUsages} be the result of {CollectFields(objectType, selectionSet, variableValues)}. - Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}. -- Let {data} and {futures} be the result of {ExecuteFieldPlan(newDeferUsages, - fieldPlan, objectType, initialValue, variableValues, serial)}. +- Let {data}, {newPendingResults}, and {futures} be the result of + {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue, + variableValues, serial)}. - Let {errors} be the list of all _field error_ raised while executing the {groupedFieldSet}. -- Let {pendingResults} and {deferStates} be the result of - {ProcessNewFutures(futures)}. -- Let {pending} and {ids} be the result of {GetPending(newPendingResults)}. -- If {pending} is empty, return an unordered map consisting of {data} and - {errors}. -- Let {hasNext} be {true}. 
- Let {initialResult} be an unordered map consisting of {data}, {errors}, - {pending}, and {hasNext}. -- Let {subsequentResults} be the result of {YieldSubsequentResults(ids, - deferStates, futures)}. -- Return {initialResult} and {subsequentResults}. - -GetPending(newPendingResults): - -- Initialize {pending} to an empty list. -- Initialize {ids} to a new unordered map of pending results to identifiers. -- For each {newPendingResult} in {newPendingResults}: - - Let {path} and {label} be the corresponding entries on {newPendingResult}. - - Let {id} be a unique identifier for this {newPendingResult}. - - Set the entry for {newPendingResult} in {ids} to {id}. - - Let {pendingEntry} be an unordered map containing {path}, {label}, and {id}. - - Append {pendingEntry} to {pending}. -- Return {pending} and {ids}. + {newPendingResults}, and {futures}. +- Return {initialResult}. ### Field Collection @@ -638,157 +630,104 @@ IsSameSet(setA, setB): - Return {false}. - Return {true}. -### Processing New Futures - -Futures must be processed carefully because pending results must be delivered to -the client in the appropriate order. In particular, nested deferred fragments -may complete in any order, but the results of those fragments must be delivered -to the client in the order in which they were specified in the operation. The -{ProcessNewFutures()} algorithm manages the tree that maintains the correct -delivery order. +### Yielding Incremental Results -ProcessNewFutures(futures, originalDeferStates): +The procedure for yielding incremental results is specified by the +{YieldIncrementalResults()} algorithm. First, any uninitiated executions are +initiated. Then, any completed deferred or streamed results are processed to +determine the payload to be yielded. Finally, if any pending results remain, the +procedure is repeated recursively. -- Let {deferStates} be a new unordered map containing all entries in - {originalDeferStates}. -- Initialize {pending} to an empty list. 
-- Initialize {collectedDeferredFragments} to the empty set. -- For each {future} in {futures}: - - If {future} will incrementally complete a Stream: - - Let {stream} be that Stream. - - Append {stream} to {pendingResults}. - - Otherwise: - - Let {deferredFragments} be the list of deferred fragments completed by - {future}. - - For each {deferredFragment} of {deferredFragments}: - - Add {deferredFragment} to {collectedDeferredFragments}. - - Let {deferState} be the entry in {deferStates} for {deferredFragment}. - - If {deferState} is not defined: - - Let {count} be {0}. - - Let {deferState} be a new unordered map containing {count}. - - Set the entry for {deferredFragment} in {deferStates} to {deferState}. - - Otherwise: - - Let {newDeferState} be a new unordered map containing all of the - entries in {deferState}. - - Let {count} be the corresponding entry on {newDeferState} for - {deferredFragment}. - - Let {newCount} be {count} + 1. - - Set the entry for {count} on {newDeferState} to {newCount}. - - Set the entry for {deferredFragment} in {deferStates} to - {newDeferState}. -- For each {deferredFragment} in {collectedDeferredFragments}: - - Let {parent} be the result of {GetNonEmptyParent(deferredFragment, - deferStates)}. - - Let {parentDeferState} be the entry for {parent} on {deferStates}. - - If {parentDeferState} is not defined: - - Append {deferredFragment} to {pending}. - - Otherwise: - - Let {newParentDeferState} be an unordered map containing all of the - entries on {parentDeferState}. - - Let {children} be a new list containing all of the entries on {children} - on {newParentDeferState}. - - Append {newPendingResult} to {children}. - - Set the corresponding entry on {newParentDeferState} to {children}. - - Set the entry for {parent} in {deferStates} to {newDeferState}. -- Return {pending} and {deferStates}. 
+YieldIncrementalResults(newFutures, originalIds, originalDeferStates, +originalRemainingFutures): -GetNonEmptyParent(deferredFragment, deferStates): - -- Let {parent} be the corresponding entry on {deferredFragment}. -- If {parent} is not defined, return. -- Let {parentDeferState} be the entry for {parent} on {deferStates}. -- If {parentDeferState} is not defined, return the result of - {GetAncestor(parent, deferStates)}. -- Return {parent}. - -### Yielding Subsequent Results - -The procedure for yielding subsequent results is specified by the -{YieldSubsequentResults()} algorithm. First, any initiated future executions are -initiated. Then, any completed future executions are processed to determine the -payload to be yielded. Finally, if any pending results remain, the procedure is -repeated recursively. - -YieldSubsequentResults(originalIds, originalDeferStates, newFutures, -initiatedFutures, pendingFutures): - -- Initialize {futures} to a list containing all items in {initiatedFutures}. -- If {pendingFutures} is not provided, initialize it to an empty list. +- Let {maybeCompletedFutures} be a new set containing all members of + {originalRemainingFutures}. - For each {future} in {newFutures}: - - If {future} contributes to a pending result that has been sent: - - If {future} has not been initiated, initiate it. - - Append {future} to {futures}. - - Otherwise: - - Append {future} to {pendingFutures}. -- Wait for any future execution contained in {maybeCompletedFutures} to - complete. -- Let {completedFutures} be a list containing all completed futures from - {maybeCompletedFutures}; let the remaining futures be {remainingFutures}. -- Let {deferStates}, {pendingResults}, {updates}, {newFutures}, - {supplementalFutures} and {pendingFutures} be the result of + - If {future} is not initiated, initiate it. + - Add {future} to {maybeCompletedFutures}. +- Wait for any futures within {maybeCompletedFutures} to complete. 
+- Let {completedFutures} be the completed futures; let {remainingFutures} be the + remaining futures. +- Let {update}, {newestFutures}, and {deferStates} be the result of {ProcessCompletedFutures(completedFutures, originalDeferStates)}. -- Append all futures in {supplementalFutures} to {remainingFutures}. -- Let {ids} and {payload} be the result of - {GetIncrementalPayload(pendingResults, originalIds, updates)}. -- If {hasNext} is not the only entry on {payload}, yield {payload}. -- If {hasNext} on {payload} is {false}: - - Complete this subsequent result stream and return. -- Yield the results of {YieldSubsequentResults(ids, deferStates, newFutures, - remainingFutures, pendingFutures)}. +- If {data} is defined on {update}: + - Let {ids} and {payload} be the result of {GetInitialPayload(update)}. + - Yield {payload}. + - If {hasNext} on {payload} is not {true}, complete this incremental result + stream and return. +- Otherwise: + - Let {ids} and {payload} be the result of {GetSubsequentPayload(pending, + originalIds, update)}. + - If {hasNext} is not the only entry on {payload}, yield {payload}. + - If {hasNext} on {payload} is {false}, complete this incremental result + stream and return. +- Yield the results of {YieldIncrementalResults(newestFutures, ids, deferStates, + remainingFutures)}. -GetIncrementalPayload(pendingResults, originalIds, updates): +GetInitialPayload(update): + +- Let {ids} be a new unordered map. +- Initialize {pending} to an empty list. +- For each {newPendingResult} in {pending} on {update}: + - Let {path} and {label} be the corresponding entries on {newPendingResult}. + - Let {id} be a unique identifier for this {newPendingResult}. + - Set the entry for {newPendingResult} in {ids} to {id}. + - Let {pendingEntry} be an unordered map containing {path}, {label}, and {id}. + - Append {pendingEntry} to {pending}. +- Let {data} and {errors} be the corresponding entries on {initialResult}. 
+- Let {payload} be an unordered map containing {data} and {errors}. +- If {data} is {null}, return {ids} and {payload}. +- If {pending} is not empty: + - Set the corresponding entry on {payload} to {pending}. + - Set the entry for {hasNext} on {payload} to {true}. +- Return {ids} and {payload}. + +GetSubsequentPayload(newPendingResults, originalIds, update): - Let {ids} be a new unordered map containing all of the entries in {originalIds}. -- Initialize {pending}, {incremental}, and {completed} to empty lists. -- For each {pendingResult} in {pendingResults}: - - If an entry for {pendingResult} exists in {ids}, continue to the next - {pendingResult} in {pendingResults}. +- Initialize {pending}, {incremental} and {completed} to empty lists. +- For each {newPendingResult} in {pending} on {update}: - Let {path} and {label} be the corresponding entries on {newPendingResult}. - Let {id} be a unique identifier for this {newPendingResult}. - - Set the entry for {pendingResult} in {ids} to {id}. + - Set the entry for {newPendingResult} in {ids} to {id}. - Let {pendingEntry} be an unordered map containing {path}, {label}, and {id}. - Append {pendingEntry} to {pending}. -- For each {update} of {updates}: - - Let {completed}, {errors}, and {incremental} be the corresponding entries on - {update}. - - For each {completedResult} in {completed}: - - Let {id} be the entry for {completedResult} on {ids}. - - If {id} is not defined, continue to the next {completedResult} in - {completed}. - - Remove the entry on {ids} for {completedResult}. - - Let {completedEntry} be an unordered map containing {id}. - - If {errors} is defined, set the corresponding entry on {completedEntry} to +- For each {completedEntry} in {completed} on {update}: + - Let {newCompletedEntry} be a new empty unordered map. + - Let {pendingResult} be the corresponding entry on {completedEntry}. + - Let {id} be the entry for {pendingResult} on {ids}. + - Remove the entry on {ids} for {pendingResult}. 
+ - Set the corresponding entry on {newCompletedEntry} to {id}. + - Let {errors} be the corresponding entry on {completedEntry}. + - If {errors} is defined, set the corresponding entry on {newCompletedEntry} + to {errors}. + - Append {newCompletedEntry} to {completed}. +- For each {incrementalResult} in {incremental} on {update}: + - If {incrementalResult} represents completion of Stream Items: + - Let {stream} be the corresponding entry on {incrementalResult}. + - Let {id} be the corresponding entry on {ids} for {stream}. + - Let {items} and {errors} be the corresponding entries on + {incrementalResult}. + - Let {incrementalEntry} be an unordered map containing {id}, {items}, and {errors}. - - Append {completedEntry} to {completed}. - - For each {incrementalResult} in {incremental}: - - If {incrementalResult} represents completion of Stream Items: - - Let {stream} be the corresponding entry on {incrementalResult}. - - Let {id} be the corresponding entry on {ids} for {stream}. - - If {id} is not defined, continue to the next {incrementalResult} in - {incremental}. - - Let {items} and {errors} be the corresponding entries on - {incrementalResult}. - - Let {incrementalEntry} be an unordered map containing {id}, {items}, and - {errors}. - - Otherwise: - - Let {id} and {subPath} be the result of calling - {GetIdAndSubPath(incrementalResult, ids)}. - - If {id} is not defined, continue to the next {incrementalResult} in - {incremental}. - - Let {data} and {errors} be the corresponding entries on - {incrementalResult}. - - Let {incrementalEntry} be an unordered map containing {id}, {data}, and - {errors}. - - Append {incrementalEntry} to {incremental}. + - Otherwise: + - Let {id} and {subPath} be the result of calling + {GetIdAndSubPath(incrementalResult, ids)}. + - Let {data} and {errors} be the corresponding entries on + {incrementalResult}. + - Let {incrementalEntry} be an unordered map containing {id}, {data}, and + {errors}. 
+  - Append {incrementalEntry} to {incremental}.
- Let {hasNext} be {false} if {ids} is empty, otherwise {true}.
- Let {payload} be an unordered map containing {hasNext}.
-- If {pending} is not empty:
-  - Set the corresponding entry on {payload} to {pending}.
-- If {incremental} is not empty:
-  - Set the corresponding entry on {payload} to {incremental}.
-- If {completed} is not empty:
-  - Set the corresponding entry on {payload} to {completed}.
+- If {pending} is not empty, set the corresponding entry on {payload} to
+  {pending}.
+- If {incremental} is not empty, set the corresponding entry on {payload} to
+  {incremental}.
+- If {completed} is not empty, set the corresponding entry on {payload} to
+  {completed}.
- Return {ids} and {payload}.

GetIdAndSubPath(deferredResult, ids):

@@ -823,136 +762,259 @@ possibly:

- Completing existing pending results.
- Contributing data for the next payload.
-- Containing additional futures.
+- Containing additional pending results or futures.

When encountering completed futures, {ProcessCompletedFutures()} calls itself
recursively on any new futures in case they have been completed.

-ProcessCompletedFutures(completedFutures, originalDeferStates, pending, updates,
-newFutures, supplementalFutures, pendingFutures):
+ProcessCompletedFutures(completedFutures, originalDeferStates,
+originalNewFutures, originalUpdate):

-- If {pending} is not provided, initialize it to the empty set.
-- If {updates}, {newFutures}, {supplementalFutures}, or {pendingFutures} are not
-  provided, initialize them to empty lists.
-- Let {deferStates} be {originalDeferStates}.
-- Initialize {maybeCompletedNewFutures} and {maybeCompletedSupplementalFutures}
-  to empty lists.
+- Let {deferStates} be a new unordered map containing all entries in
+  {originalDeferStates}.
+- Let {pending}, {incremental}, and {completed} be new lists containing all the
+  items in the corresponding entries on {originalUpdate}.
+- Let {newFutures} be a new set containing all members of {originalNewFutures}. - For each {completedFuture} in {completedFutures}: - - Let {result} be the result of {completedFuture}. - - If {result} represents completion of Stream Items: - - Let {update} and {resultFutures} be the result of calling - {GetUpdatesForStreamItems(result)}. - - Let {remainingPendingFutures} be {pendingFutures}. + - If {completedFuture} completes the initial result: + - Let {initialResult} be the result of {completedFuture}. + - Let {newPendingResults} and {futures} be the corresponding entries on + {initialResult}. + - Let {pending}, {newFutures}, and {deferStates} be the result of + {FilterDefers(newPendingResults, futures)}. + - Let {data} and {errors} be the corresponding entries on {initialResult}. + - Let {update} be a new unordered map containing {data}, {errors}, and + {pending}. + - Return {update}, {pending}, {newFutures}, and {deferStates}. + - Otherwise, if {completedFuture} incrementally completes a stream: + - Let {resultUpdate}, {resultPending}, {resultNewFutures}, and {deferStates} + be the result of {GetUpdateForStreamItems(deferStates, completedFuture)}. - Otherwise: - - Let {deferStates}, {update}, {resultPending}, and {resultFutures} be the - result of calling {GetUpdatesForDeferredResult(deferStates, result)}. - - Append all items in {resultPending} to {pending}. - - Initialize {remainingPendingFutures} an empty list. - - For each {future} in {pendingFutures}: - - Let {deferredFragments} be the Deferred Fragments completed by {future}. - - For each {deferredFragment} of {deferredFragments}: - - If {deferredFragment} is in {resultPending}, append {future} to - {maybeCompletedNewFutures}. - - Continue to the next {future} in {pendingFutures}. - - Append {future} to {remainingPendingFutures}. - - Append {update} to {updates}. - - For each {resultFuture} in {resultFutures}: - - Let {deferredFragments} be the Deferred Fragments completed by - {resultFuture}. 
-      - For each {deferredFragment} of {deferredFragments}:
-        - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
-        - If {deferState} is defined:
-          - Append {resultFuture} to {maybeCompletedSupplementalFutures}.
-          - Continue to the next {resultFuture} in {resultFutures}.
+    - Let {resultUpdate}, {resultPending}, {resultNewFutures}, and {deferStates}
+      be the result of {GetUpdateForDeferredResult(deferStates,
+      completedFuture)}.
+  - Append all items in {resultPending} to {pending}.
+  - Add all items in {resultNewFutures} to {newFutures}.
+  - Append all of the items in {incremental} and {completed} on {resultUpdate}
+    to {incremental} and {completed}, respectively.
+- Let {newCompletedFutures} be the completed futures from {newFutures}; let
+  {remainingNewFutures} be the remaining futures.
+- Let {update} be a new unordered map containing {pending}, {incremental}, and
+  {completed}.
+- If {newCompletedFutures} is empty:
+  - Return {update}, {newFutures}, and {deferStates}.
+- Return the result of {ProcessCompletedFutures(newCompletedFutures,
+  deferStates, remainingNewFutures, update)}.
+
+FilterDefers(newPendingResults, futures, originalDeferStates):
+
+- Let {streamFutures} and {deferStates} be the result of
+  {FilterDeferredFutures(originalDeferStates, futures)}.
+- Let {pending}, {newFutures}, and {deferStates} be the result of
+  {FilterDoublyDeferredFragments(newPendingResults, deferStates)}.
+- Add all items in {streamFutures} to {newFutures}.
+- Return {pending}, {newFutures}, and {deferStates}.
+
+FilterDeferredFutures(originalDeferStates, futures):
+
+- Initialize {streamFutures} to an empty list.
+- Let {deferStates} be a new unordered map containing all entries in
+  {originalDeferStates}.
+- For each {future} of {futures}:
+  - If {future} incrementally completes a stream:
+    - Append {future} to {streamFutures}.
+    - Continue to the next {future} in {futures}.
+  - Let {deferredFragments} be a list of the Deferred Fragments incrementally
+    completed by {future}.
+ - For each {deferredFragment} of {deferredFragments}: + - Let {deferState} be the entry in {deferStates} for {deferredFragment}. + - If {deferState} is not defined: + - Let {pendingFutures} be a new set containing {future}. + - Let {count} be {1}. + - Let {newDeferState} be a new unordered map containing {pendingFutures} + and {count}. - Otherwise: - - Append {resultFuture} to {maybeCompletedNewFutures}. -- Let {completedFutures} be a list containing all completed futures from - {maybeCompletedNewFutures} and {maybeCompletedSupplementalFutures}; append the - remaining futures to {newFutures} and {supplementalFutures}, respectively. -- If {completedFutures} is empty: - - Let {pendingResults} and {deferStates} be the result of - {ProcessNewFutures(newFutures, originalDeferStates)}. - - Add all items in {pendingResults} to {pending}. - - Return {deferStates}, {pending}, {updates}, {newFutures}, and - {supplementalFutures}, {remainingPendingFutures}. -- Let {pendingResults} and {deferStates} be the results of - {ProcessNewFutures(supplementalFutures, deferStates)}. -- Add all items in {pendingResults} to {pending}. -- Return the result of {ProcessCompletedFutures(newFutures, deferStates, - pending, updates, newFutures, supplementalFutures, remainingPendingFutures)}. - -GetUpdatesForStreamItems(streamItems): - -- Let {stream}, {items}, and {errors} be the corresponding entries on - {streamItems}. + - Let {newDeferState} be a new unordered map containing all entries in + {deferState}. + - Reset {pendingFutures} on {newDeferState} to a set containing all of its + original members as well as {future}. + - Increment {count} on {newDeferState}. + - Set the entry for {deferredFragment} in {deferStates} to {newDeferState}. +- Return {streamFutures} and {deferStates}. + +FilterDoublyDeferredFragments(newPendingResults, originalDeferStates): + +- Let {deferStates} be a new unordered map containing all entries in + {originalDeferStates}. 
+- Initialize {pending} to an empty list.
+- Initialize {newFutures} to the empty set.
+- For each {newPendingResult} in {newPendingResults}:
+  - If {newPendingResult} will incrementally complete a stream:
+    - Append {newPendingResult} to {pending}.
+    - Continue to the next {newPendingResult} in {newPendingResults}.
+  - Let {deferState} be the entry in {deferStates} for {newPendingResult}.
+  - If {deferState} is not defined:
+    - Continue to the next {newPendingResult} in {newPendingResults}.
+  - Let {parent} be the result of {GetNonEmptyParent(newPendingResult,
+    deferStates)}.
+  - If {parent} is not defined:
+    - Append {newPendingResult} to {pending}.
+    - Let {pendingFutures} be the corresponding entry on {deferState}.
+    - Append all items in {pendingFutures} to {newFutures}.
+    - Continue to the next {newPendingResult} in {newPendingResults}.
+  - Let {parentDeferState} be the entry in {deferStates} for {parent}.
+  - Let {newParentDeferState} be a new unordered map containing all entries in
+    {parentDeferState}.
+  - Set the entry for {parent} in {deferStates} to {newParentDeferState}.
+  - Let {newChildren} be a new list containing all members of {children} on
+    {newParentDeferState} as well as {newPendingResult}.
+  - Set the {children} entry on {newParentDeferState} to {newChildren}.
+- Return {pending}, {newFutures}, and {deferStates}.
+
+GetNonEmptyParent(deferredFragment, deferStates):
+
+- Let {parent} be the corresponding entry on {deferredFragment}.
+- If {parent} is not defined, return.
+- Let {parentDeferState} be the entry for {parent} on {deferStates}.
+- If {parentDeferState} is not defined, return the result of
+  {GetNonEmptyParent(parent, deferStates)}.
+- Return {parent}.
+
+GetUpdateForStreamItems(originalDeferStates, completedFuture):
+
+- Let {streamItems} be the result of {completedFuture}.
+- Let {deferStates} be a new unordered map containing all the entries in
+  {originalDeferStates}.
+- Let {stream}, {items}, and {errors} be the corresponding entries on
+  {streamItems}.
+- Initialize {pending} to an empty list.
+- Initialize {newFutures} to the empty set.
- If {items} is not defined, the stream has asynchronously ended:
-  - Let {completed} be a list containing {stream}.
+  - Let {completedEntry} be an empty unordered map.
+  - Set the entry for {pendingResult} on {completedEntry} to {stream}.
+  - Let {completed} be a list containing {completedEntry}.
  - Let {update} be an unordered map containing {completed}.
- Otherwise, if {items} is {null}:
-  - Let {completed} be a list containing {stream}.
  - Let {errors} be the corresponding entry on {streamItems}.
+  - Let {completedEntry} be an unordered map containing {errors}.
+  - Set the entry for {pendingResult} on {completedEntry} to {stream}.
+  - Let {completed} be a list containing {completedEntry}.
  - Let {update} be an unordered map containing {completed} and {errors}.
- Otherwise:
  - Let {incremental} be a list containing {streamItems}.
  - Let {update} be an unordered map containing {incremental}.
-  - Let {futures} be the corresponding entry on {streamItems}.
-- Return {update} and {futures}.
+  - Let {newPendingResults} and {futures} be the corresponding entries on
+    {streamItems}.
+  - Let {pending}, {newFutures}, and {deferStates} be the result of
+    {FilterDefers(newPendingResults, futures, originalDeferStates)}.
+- Return {update}, {pending}, {newFutures}, and {deferStates}.

-GetUpdatesForDeferredResult(originalDeferStates, deferredResult):
+GetUpdateForDeferredResult(originalDeferStates, completedFuture):

-- Let {deferStates} be a new unordered map containing all of the entries in
+- Let {deferredResult} be the result of {completedFuture}.
+- Let {deferStates} be a new unordered map containing all the entries in
   {originalDeferStates}.
-- Initialize {futures} to an empty list.
+- Initialize {newFutures} to the empty set.
- Let {deferredFragments}, {data}, and {errors} be the corresponding entries on
  {deferredResult}.
-- Initialize {completed} to an empty list.
- If {data} is {null}:
+  - Initialize {completed} to an empty list.
  - For each {deferredFragment} of {deferredFragments}:
    - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
    - If {deferState} is not defined, continue to the next {deferredFragment}
      of {deferredFragments}.
-    - Remove the entry for {deferredFragment} on {completed}.
-    - Append {deferredFragment} to {completed}.
-  - Let {update} be an unordered map containing {completed} and {errors}.
-  - Return {update} and {futures}.
-- Initialize {incremental} to an empty list.
-- Initialize {newPending} to the empty set.
+    - Let {deferStates} be the result of {RemoveFragment(deferredFragment,
+      deferState, deferStates)}.
+    - Let {completedEntry} be an unordered map containing {errors}.
+    - Set the entry for {pendingResult} in {completedEntry} to
+      {deferredFragment}.
+    - Append {completedEntry} to {completed}.
+  - Let {update} be an unordered map containing {completed}.
+  - Return {update}, {newFutures}, and {deferStates}.
+- Initialize {pending}, {incremental}, and {completed} to empty lists.
- For each {deferredFragment} of {deferredFragments}:
  - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
  - If {deferState} is not defined, continue to the next {deferredFragment} of
    {deferredFragments}.
  - Let {newDeferState} be a new unordered map containing all entries on
    {deferState}.
  - Set the entry for {deferredFragment} on {deferStates} to {newDeferState}.
  - Decrement {count} on {newDeferState}.
-  - Let {pending} be a new set containing all of the members of {pending} on
-    {newDeferState}.
-  - Set the corresponding entry on {newDeferState} to {pending}.
-  - Add {deferredResult} to {pending}.
-  - If {count} on {newDeferState} is equal to {0}:
-    - Let {children} be the corresponding entry on {newDeferState}.
-    - Add all items in {children} to {newPending}.
-    - Remove the entry for {deferredFragment} on {deferStates}.
-    - Append {deferredFragment} to {completed}.
-  - Append all items in {pending} on {newDeferState} to {incremental}.
-- For each {deferredResult} in {incremental}:
-  - Let {deferredFragments} be the corresponding entry on {deferredResult}.
-  - For each {deferredFragment} in {deferredFragments}:
-    - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
-    - If {deferState} is not defined, continue to the next {deferredFragment} of
-      {deferredFragments}.
-    - Let {pending} be the corresponding entry on {deferState}.
-    - If {pending} contains {deferredResult}:
-      - Let {newDeferState} be a new unordered map containing all entries on
-        {deferState}.
-      - Set the entry for {deferredFragment} on {deferStates} to
-        {newDeferState}.
-      - Let {pending} be a new set containing all of the members of {pending} on
-        {newDeferState}.
-      - Set the corresponding entry on {newDeferState} to {pending}.
-      - Remove {deferredResult} from {pending}.
-- Let {update} be an unordered map containing {incremental} and {completed}.
-- Return {deferStates}, {update}, {newPending}, and {futures}.
+  - Let {newCompletedFutures} be a new set containing all members of
+    {completedFutures} on {newDeferState}.
+  - Set the {completedFutures} entry on {newDeferState} to
+    {newCompletedFutures}.
+  - Add {completedFuture} to {newCompletedFutures}.
+  - Let {count} be the corresponding entry on {newDeferState}.
+  - If {count} is {0}:
+    - Let {deferStates}, {fragmentPending}, {fragmentIncremental},
+      {fragmentCompleted}, and {fragmentNewFutures} be the result of
+      {CompleteFragment(deferredFragment, newDeferState, deferStates)}.
+    - Append all items in {fragmentPending}, {fragmentIncremental}, and
+      {fragmentCompleted} to {pending}, {incremental}, and {completed},
+      respectively.
+    - Add all items in {fragmentNewFutures} to {newFutures}.
+- Let {update} be an unordered map containing {pending}, {incremental}, and
+  {completed}.
+- Return {update}, {newFutures}, and {deferStates}.
+
+RemoveFragment(deferredFragment, deferState, originalDeferStates):
+
+- Let {deferStates} be a new unordered map containing all entries in
+  {originalDeferStates}.
+- Remove the entry for {deferredFragment} on {deferStates}.
+- Let {children} be the corresponding entry on {deferState}.
+- For each {child} of {children}:
+  - Let {childDeferState} be the entry on {deferStates} for {child}.
+  - If {childDeferState} is not defined, continue to the next {child} of
+    {children}.
+  - Let {deferStates} be the result of {RemoveFragment(child, childDeferState,
+    deferStates)}.
+- Return {deferStates}.
+
+CompleteFragment(deferredFragment, deferState, originalDeferStates):
+
+- Let {deferStates} be a new unordered map containing all entries in
+  {originalDeferStates}.
+- Remove the entry for {deferredFragment} on {deferStates}.
+- Let {completedFutures} be the corresponding entry on {deferState}.
+- Initialize {pending}, {incremental}, and {completed} to empty lists.
+- Initialize {newFutures} to the empty set.
+- For each {completedFuture} in {completedFutures}:
+  - Let {deferredResult} be the result of {completedFuture}.
+  - Append {deferredResult} to {incremental}.
+  - Let {newPendingResults} and {futures} be the corresponding entries on
+    {deferredResult}.
+  - Let {resultPending}, {resultNewFutures}, and {deferStates} be the result of
+    {FilterDefers(newPendingResults, futures, deferStates)}.
+  - Append all items in {resultPending} to {pending}.
+  - Add all items in {resultNewFutures} to {newFutures}.
+  - Let {deferStates} be the result of {RemoveFuture(completedFuture,
+    deferStates)}.
+- Append {deferredFragment} to {completed}.
+- Let {children} be the corresponding entry on {deferState}.
+- Append all items in {children} to {pending}.
+- For each {child} of {children}:
+  - Let {childDeferState} be the entry for {child} on {deferStates}.
+  - Let {deferStates}, {childPending}, {childIncremental}, {childCompleted}, and
+    {childNewFutures} be the result of {CompleteFragment(child, childDeferState,
+    deferStates)}.
+  - Append all items in {childPending}, {childIncremental}, and {childCompleted}
+    to {pending}, {incremental}, and {completed}, respectively.
+  - Add all items in {childNewFutures} to {newFutures}.
+- Return {deferStates}, {pending}, {incremental}, {completed}, and {newFutures}.
+
+RemoveFuture(completedFuture, originalDeferStates):
+
+- Let {deferStates} be a new unordered map containing all entries in
+  {originalDeferStates}.
+- Let {deferredResult} be the result of {completedFuture}.
+- Let {deferredFragments} be the corresponding entry on {deferredResult}.
+- For each {deferredFragment} in {deferredFragments}:
+  - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
+  - If {deferState} is not defined, continue to the next {deferredFragment} of
+    {deferredFragments}.
+  - Reset {pendingFutures} and {completedFutures} on {deferState} to new sets
+    containing all of their original members, respectively, except for
+    {completedFuture}.
+- Return {deferStates}.

## Executing a Field Plan

@@ -967,27 +1029,32 @@ variableValues, serial, path, deferUsageSet, deferMap):

- Let {groupedFieldSet}, {newGroupedFieldSets}, {newDeferUsages}, and
  {newGroupedFieldSetsRequiringDeferral} be the corresponding entries on
  {fieldPlan}.
-- Let {newDeferMap} be the result of {GetNewDeferMap(newDeferUsages, path,
-  deferMap)}.
+- Let {newPendingResults} and {newDeferMap} be the result of
+  {GetNewDeferredFragments(newDeferUsages, path, deferMap)}.
+- Let {supplementalFutures} be the result of {GetFutures(objectType,
+  objectValue, variableValues, newGroupedFieldSets, path, newDeferMap)}.
+- Let {deferredFutures} be the result of {GetFutures(objectType, objectValue,
+  variableValues, newGroupedFieldSetsRequiringDeferral, path, newDeferMap)}.
+- Let {futures} be a list containing all members of {supplementalFutures} and
+  {deferredFutures}.
- Allowing for parallelization, perform the following steps:
-  - Let {data} and {nestedFutures} be the result of running
-    {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+  - Let {data}, {nestedNewPendingResults}, and {nestedFutures} be the result of
+    running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
    variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
    {true}, _normally_ (allowing parallelization) otherwise.
-  - Let {futures} be the result of {ExecuteDeferredGroupedFieldSets(objectType,
-    objectValue, variableValues, newGroupedFieldSets, false, path,
-    newDeferMap)}.
-  - Let {deferredFutures} be the result of
-    {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
-    newGroupedFieldSetsRequiringDeferral, true, path, newDeferMap)}.
-- Let {futures} be a list containing {future}, {deferredFutures}, and all items
-  in {nestedFutures}.
-- Return {data} and {futures}.
-
-GetNewDeferMap(newDeferUsages, path, deferMap):
+  - Initiate all futures in {supplementalFutures}.
+  - If early execution of deferred fields is desired, following any
+    implementation specific deferral of further execution, initiate all futures
+    in {deferredFutures}.
+- Append all items in {nestedNewPendingResults} and {nestedFutures} to
+  {newPendingResults} and {futures}, respectively.
+- Return {data}, {newPendingResults}, and {futures}.
+
+GetNewDeferredFragments(newDeferUsages, path, deferMap):

- If {newDeferUsages} is empty:
  - Return an empty list and {deferMap}.
+- Initialize {newDeferredFragments} to an empty list.
- Let {newDeferMap} be a new unordered map of Defer Usage records to Deferred
  Fragment records containing all of the entries in {deferMap}.
- For each {deferUsage} in {newDeferUsages}:
@@ -996,8 +1063,38 @@ GetNewDeferMap(newDeferUsages, path, deferMap):
  - Let {label} be the corresponding entry on {deferUsage}.
  - Let {newDeferredFragment} be an unordered map containing {ancestors},
    {path} and {label}.
+  - Append {newDeferredFragment} to {newDeferredFragments}.
  - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}.
-- Return {newDeferMap}.
+- Return {newDeferredFragments} and {newDeferMap}.
+
+GetFutures(objectType, objectValue, variableValues, newGroupedFieldSets, path,
+deferMap):
+
+- Initialize {futures} to an empty list.
+- For each {deferUsageSet} and {groupedFieldSet} in {newGroupedFieldSets}:
+  - Let {deferredFragments} be an empty list.
+  - For each {deferUsage} in {deferUsageSet}:
+    - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}.
+    - Append {deferredFragment} to {deferredFragments}.
+  - Let {future} represent the future execution of
+    {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+    variableValues, deferredFragments, path, deferUsageSet, deferMap)},
+    incrementally completing {deferredFragments}.
+  - Append {future} to {futures}.
+- Return {futures}.
+
+ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+variableValues, deferredFragments, path, deferUsageSet, deferMap):
+
+- Let {data}, {newPendingResults}, and {futures} be the result of running
+  {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
+  variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing
+  parallelization).
+- Let {errors} be the list of all _field error_ raised while executing the
+  {groupedFieldSet}.
+- Let {deferredResult} be an unordered map containing {path},
+  {deferredFragments}, {data}, {errors}, {newPendingResults}, and {futures}.
+- Return {deferredResult}.
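Note: the "future" produced by {GetFutures()} represents work that is declared
up front but only begins when explicitly initiated. As a non-normative sketch
(all names here are invented for illustration and are not part of this
specification), a JavaScript implementation might model such a future as:

```javascript
// A "future" wraps work that is only started when initiate() is called,
// mirroring how a deferred grouped field set may be executed eagerly or
// lazily at the implementation's discretion.
function createFuture(work) {
  let promise = null;
  return {
    initiate() {
      // Idempotent: initiating twice does not re-run the work.
      if (promise === null) promise = Promise.resolve().then(work);
      return promise;
    },
    result() {
      // Demanding the result initiates the work if it has not started yet.
      return this.initiate();
    },
  };
}

// One future per new grouped field set, as in GetFutures(); the payloads
// here are placeholders standing in for deferred execution results.
const futures = [
  createFuture(() => ({ data: { a: 1 }, errors: [] })),
  createFuture(() => ({ data: { b: 2 }, errors: [] })),
];

// When early execution of deferred fields is desired, an implementation may
// initiate immediately; otherwise the work starts only on first demand.
for (const future of futures) future.initiate();
```

The idempotent `initiate()` is what allows the same future to appear both in
the eager and the demand-driven paths without duplicating work.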
## Executing a Grouped Field Set @@ -1012,19 +1109,20 @@ ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, path, deferUsageSet, deferMap): - Initialize {resultMap} to an empty ordered map. -- Initialize {futures} to an empty list. +- Initialize {newPendingResults} and {futures} to empty lists. - For each {groupedFieldSet} as {responseKey} and {fields}: - Let {fieldName} be the name of the first entry in {fields}. Note: This value is unaffected if an alias is used. - Let {fieldType} be the return type defined for the field {fieldName} of {objectType}. - If {fieldType} is defined: - - Let {responseValue} and {fieldFutures} be the result of - {ExecuteField(objectType, objectValue, fieldType, fields, variableValues, - path)}. + - Let {responseValue}, {fieldNewPendingResults}, and {fieldFutures} be the + result of {ExecuteField(objectType, objectValue, fieldType, fields, + variableValues, path)}. - Set {responseValue} as the value for {responseKey} in {resultMap}. - - Append all futures in {fieldFutures} to {futures}. -- Return {resultMap} and {futures}. + - Append all items in {fieldNewPendingResults} and {fieldFutures} to + {newPendingResults} and {futures}, respectively. +- Return {resultMap}, {newPendingResults}, and {futures}. Note: {resultMap} is ordered by which fields appear first in the operation. This is explained in greater detail in the Field Collection section above. @@ -1168,45 +1266,7 @@ A correct executor must generate the following result for that selection set: When subsections contain a `@stream` or `@defer` directive, these subsections are no longer required to execute serially. Execution of the deferred or -streamed sections of the subsection may be executed in parallel, as defined in -{ExecuteDeferredGroupedFieldSets} and {ExecuteStreamField}. 
- -## Executing Deferred Grouped Field Sets - -ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, -newGroupedFieldSets, shouldInitiateDefer, path, deferMap): - -- Initialize {futures} to an empty list. -- For each {deferUsageSet} and {newGroupedFieldSet} in {newGroupedFieldSets}: - - Let {deferredFragments} be an empty list. - - For each {deferUsage} in {deferUsageSet}: - - Let {deferredFragment} be the entry for {deferUsage} in {deferMap}. - - Append {deferredFragment} to {deferredFragments}. - - Let {groupedFieldSet} be the corresponding entries on {newGroupedFieldSet}. - - Let {future} represent the future execution of - {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue, - variableValues, deferredFragments, path, deferUsageSet, deferMap)}, - incrementally completing {deferredFragments}. - - If {shouldInitiateDefer} is {false}: - - Initiate {future}. - - Otherwise, if early execution of deferred fields is desired: - - Following any implementation specific deferral of further execution, - initiate {future}. - - Append {future} to {futures}. -- Return {futures}. - -ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue, -variableValues, path, deferUsageSet, deferMap): - -- Let {data} and {futures} be the result of running - {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue, - variableValues, path, deferUsageSet, deferMap)} _normally_ (allowing - parallelization). -- Let {errors} be the list of all _field error_ raised while executing the - {groupedFieldSet}. -- Let {deferredResult} be an unordered map containing {path}, - {deferredFragments}, {data}, {errors}, and {futures}. -- Return {deferredResult}. +streamed sections of the subsection may be executed in parallel. ## Executing Fields @@ -1333,14 +1393,14 @@ deferUsageSet, deferMap): - If the {fieldType} is a Non-Null type: - Let {innerType} be the inner type of {fieldType}. 
- - Let {completedResult} and {futures} be the result of calling - {CompleteValue(innerType, fields, result, variableValues, path)}. + - Let {completedResult}, {newPendingResults}, and {futures} be the result of + calling {CompleteValue(innerType, fields, result, variableValues, path)}. - If {completedResult} is {null}, raise a _field error_. - - Return {completedResult} and {futures}. + - Return {completedResult}, {newPendingResults}, and {futures}. - If {result} is {null} (or another internal value similar to {null} such as {undefined}), return {null}. - If {fieldType} is a List type: - - Initialize {futures} to an empty list. + - Initialize {newPendingResults} and {futures} to empty lists. - If {result} is not a collection of values, raise a _field error_. - Let {field} be the first entry in {fields}. - Let {innerType} be the inner type of {fieldType}. @@ -1371,16 +1431,18 @@ deferUsageSet, deferMap): - Following any implementation specific deferral of further execution, initiate {future}. - Append {future} to {futures}. - - Otherwise: - - Wait for the next item from {result} via the {iterator}. - - If an item is not retrieved because of an error, raise a _field error_. - - Let {item} be the item retrieved from {result}. - - Let {itemPath} be {path} with {index} appended. - - Let {completedItem} and {itemFutures} be the result of calling - {CompleteValue(innerType, fields, item, variableValues, itemPath)}. - - Append {completedItem} to {items}. - - Append all futures in {itemFutures} to {futures}. - - Return {items} and {futures}. + - Return {items}, {newPendingResults}, and {futures}. + - Wait for the next item from {result} via the {iterator}. + - If an item is not retrieved because of an error, raise a _field error_. + - Let {item} be the item retrieved from {result}. + - Let {itemPath} be {path} with {index} appended. 
+    - Let {completedItem}, {itemNewPendingResults}, and {itemFutures} be the
+      result of calling {CompleteValue(innerType, fields, item, variableValues,
+      itemPath)}.
+    - Append {completedItem} to {items}.
+    - Append all items in {itemNewPendingResults} and {itemFutures} to
+      {newPendingResults} and {futures}, respectively.
+  - Return {items}, {newPendingResults}, and {futures}.
- If {fieldType} is a Scalar or Enum type:
  - Return the result of {CoerceResult(fieldType, result)}.
- If {fieldType} is an Object, Interface, or Union type:
@@ -1412,10 +1474,10 @@ variableValues):

- Let {path} be the corresponding entry on {stream}.
- Let {itemPath} be {path} with {index} appended.
- Wait for the next item from {iterator}.
-- If {iterator} is closed, return.
+- If {iterator} is closed, complete this data stream and return.
- Let {item} be the next item retrieved via {iterator}.
- Let {nextIndex} be {index} plus one.
-- Let {completedItem} and {itemFutures} be the result of
+- Let {completedItem}, {newPendingResults}, and {futures} be the result of
  {CompleteValue(innerType, fields, item, variableValues, itemPath)}.
- Initialize {items} to an empty list.
- Append {completedItem} to {items}.
@@ -1426,10 +1488,9 @@ variableValues):

  - If early execution of streamed fields is desired:
    - Following any implementation specific deferral of further execution,
      initiate {future}.
-- Initialize {futures} to a list containing {future}.
-- Append all futures in {itemFutures} to {futures}.
+- Append {future} to {futures}.
- Let {streamedItems} be an unordered map containing {stream}, {items}, {errors},
+  {newPendingResults}, and {futures}.
- Return {streamedItems}.

**Coercing Results**

@@ -1546,7 +1607,7 @@ resolves to {null}, then the entire list must resolve to
{null}. If the `List` type is also wrapped in a `Non-Null`, the field error
continues to propagate upwards.
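This error-boundary behavior can be sketched non-normatively (all names here
are invented for illustration; this is not the specification's algorithm):

```javascript
// A null raised for a Non-Null field propagates to the parent field...
function completeNonNull(completeInner) {
  const { value, errors } = completeInner();
  if (value === null) {
    return {
      value: null,
      errors: [...errors, 'Cannot return null for non-nullable field.'],
      raised: true,
    };
  }
  return { value, errors, raised: false };
}

// ...but a defer/stream payload acts as a boundary that absorbs it: the
// payload itself becomes `data: null` with errors, while the parent result
// (already delivered in an earlier payload) is unaffected.
function completeDeferredPayload(completeFields) {
  const { value, errors, raised } = completeNonNull(completeFields);
  return raised ? { data: null, errors } : { data: value, errors };
}

// A deferred field whose Non-Null value resolves to null nulls out only
// this payload.
const payload = completeDeferredPayload(() => ({ value: null, errors: [] }));
```

The boundary keeps a late failure in a deferred or streamed section from
retroactively invalidating data the client has already received.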
-When a field error is raised inside `ExecuteDeferredGroupedFieldSets` or
+When a field error is raised inside `ExecuteDeferredGroupedFieldSet` or
`ExecuteStreamField`, the defer and stream payloads act as error boundaries.
That is, the null resulting from a `Non-Null` type cannot propagate outside of
the boundary of the defer or stream payload.

From ae882e80057c2bb4b627d138848cfee9429a72e5 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 19 Jan 2024 10:41:09 +0200
Subject: [PATCH 21/46] change to have an incremental update stream mapped to
 an incremental payload stream

The update stream suppresses future completion that should not lead to a
response, i.e. until all incremental entries for a given fragment have been
completed.

The payload stream does the response formatting.

NOTE: I think it is possible that we can move payload response formatting out
of the execution section entirely, and into the Response section.
---
 spec/Section 6 -- Execution.md | 425 ++++++++++++++++++---------------
 1 file changed, 236 insertions(+), 189 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 924703638..75884438d 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -343,9 +343,11 @@ serial):

- Let {incrementalResults} be the result of {YieldIncrementalResults(futures)}.
- Wait for the first result in {incrementalResults} to be available.
- Let {initialResult} be that result.
-- If {hasNext} on {initialResult} is not {true}:
-  - Return {initialResult}.
-- Return {initialResult} and {incrementalResults}.
+- Let {initialResponse} and {ids} be the result of
+  {GetInitialResponse(initialResult)}.
+- Let {subsequentResponses} be the result of running
+  {MapSubsequentResultToResponse(incrementalResults, ids)}.
+- Return {initialResponse} and {subsequentResponses}.
ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet,
serial):
@@ -363,6 +365,106 @@ serial):
  {newPendingResults}, and {futures}.
- Return {initialResult}.

+MapSubsequentResultToResponse(subsequentResultStream, originalIds):
+
+- Let {ids} be a new unordered map containing all of the entries in
+  {originalIds}.
+- Return a new event stream {subsequentResponseStream} which yields events as
+  follows:
+- For each {result} on {subsequentResultStream}:
+  - Let {ids} and {response} be the result of {GetSubsequentResponse(result,
+    ids)}.
+  - Yield an event containing {response}.
+- When {subsequentResultStream} completes: complete this event stream.
+
+GetInitialResponse(initialResult):
+
+- Let {newPendingResults} be the entry for {pending} on {initialResult}.
+- Let {pending} and {ids} be the result of {GetPending(newPendingResults)}.
+- Let {data} and {errors} be the corresponding entries on {initialResult}.
+- Let {initialResponse} be an unordered map containing {data} and {errors}.
+- If {pending} is not empty:
+  - Set the corresponding entry on {initialResponse} to {pending}.
+  - Set the entry for {hasNext} on {initialResponse} to {true}.
+- Return {initialResponse} and {ids}.
+
+GetPending(newPendingResults, originalIds):
+
+- If {originalIds} is not provided, initialize it to an empty unordered map.
+- Let {ids} be a new unordered map containing all of the entries in
+  {originalIds}.
+- Initialize {pending} to an empty list.
+- For each {newPendingResult} in {newPendingResults}:
+  - Let {path} and {label} be the corresponding entries on {newPendingResult}.
+  - Let {id} be a unique identifier for this {newPendingResult}.
+  - Set the entry for {newPendingResult} in {ids} to {id}.
+  - Let {pendingEntry} be an unordered map containing {path}, {label}, and {id}.
+  - Append {pendingEntry} to {pending}.
+- Return {pending} and {ids}.
+
+GetSubsequentResponse(update, originalIds):
+
+- Let {newPendingResults} be the entry for {pending} on {update}.
+- Let {pending} and {ids} be the result of {GetPending(newPendingResults,
+  originalIds)}.
+- Initialize {incremental} and {completed} to empty lists. +- For each {completedEntry} in {completed} on {update}: + - Let {newCompletedEntry} be a new empty unordered map. + - Let {pendingResult} be the corresponding entry on {completedEntry}. + - Let {id} be the entry for {pendingResult} on {ids}. + - Remove the entry on {ids} for {pendingResult}. + - Set the corresponding entry on {newCompletedEntry} to {id}. + - Let {errors} be the corresponding entry on {completedEntry}. + - If {errors} is defined, set the corresponding entry on {newCompletedEntry} + to {errors}. + - Append {newCompletedEntry} to {completed}. +- For each {incrementalResult} in {incremental} on {update}: + - If {incrementalResult} represents completion of Stream Items: + - Let {stream} be the corresponding entry on {incrementalResult}. + - Let {id} be the corresponding entry on {ids} for {stream}. + - Let {items} and {errors} be the corresponding entries on + {incrementalResult}. + - Let {incrementalEntry} be an unordered map containing {id}, {items}, and + {errors}. + - Otherwise: + - Let {id} and {subPath} be the result of calling + {GetIdAndSubPath(incrementalResult, ids)}. + - Let {data} and {errors} be the corresponding entries on + {incrementalResult}. + - Let {incrementalEntry} be an unordered map containing {id}, {data}, and + {errors}. + - Append {incrementalEntry} to {incremental}. +- Let {hasNext} be {false} if {ids} is empty, otherwise {true}. +- Let {payload} be an unordered map containing {hasNext}. +- If {pending} is not empty, set the corresponding entry on {payload} to + {pending}. +- If {incremental} is not empty, set the corresponding entry on {payload} to + {incremental}. +- If {completed} is not empty, set the corresponding entry on {payload} to + {completed}. +- Return {ids} and {payload}. + +GetIdAndSubPath(deferredResult, ids): + +- Initialize {releasedFragments} to an empty list. +- Let {deferredFragments} be the corresponding entry on {deferredResult}. 
+- For each {deferredFragment} in {deferredFragments}:
+  - Let {id} be the entry for {deferredFragment} on {ids}.
+  - If {id} is defined, append {deferredFragment} to {releasedFragments}.
+- Let {currentFragment} be the first member of {releasedFragments}.
+- Let {currentPath} be the entry for {path} on {currentFragment}.
+- Let {currentPathLength} be the length of {currentPath}.
+- For each remaining {deferredFragment} within {deferredFragments}:
+  - Let {path} be the corresponding entry on {deferredFragment}.
+  - Let {pathLength} be the length of {path}.
+  - If {pathLength} is larger than {currentPathLength}:
+    - Set {currentPathLength} to {pathLength}.
+    - Set {currentFragment} to {deferredFragment}.
+- Let {id} be the entry on {ids} for {currentFragment}.
+- If {id} is not defined, return.
+- Let {path} be the corresponding entry on {currentFragment}.
+- Let {subPath} be the subset of {path}, omitting the first {currentPathLength}
+  entries.
+- Return {id} and {subPath}.
+
 ### Field Collection
 
 Before execution, selection set(s) are converted to a field plan via a two-step
@@ -638,120 +740,34 @@ initiated. Then, any completed deferred or streamed results are processed to determine the payload to be yielded. Finally, if any pending results remain, the procedure is repeated recursively.
 
-YieldIncrementalResults(newFutures, originalIds, originalDeferStates,
-originalRemainingFutures):
+YieldIncrementalResults(newFutures, originalFutureStates, originalDeferStates):
 
-- Let {maybeCompletedFutures} be a new set containing all members of
-  {originalRemainingFutures}.
+- Let {futureStates} be a new unordered map containing all entries in
+  {originalFutureStates}.
- For each {future} in {newFutures}:
  - If {future} is not initiated, initiate it.
-  - Add {future} to {maybeCompletedFutures}.
+  - Let {futureState} be the entry for {future} in {futureStates}.
+  - If {futureState} is not defined:
+    - Let {futureState} be a new unordered map.
+    - If {future} incrementally completes Deferred Fragments:
+      - Let {defers} be those Deferred Fragments.
+      - Let {count} be {0}.
+      - Set the corresponding entries on {futureState} to {count} and {defers}.
+    - Set the entry for {future} in {futureStates} to {futureState}.
+- Let {maybeCompletedFutures} be the set of keys of {futureStates}.
- Wait for any futures within {maybeCompletedFutures} to complete.
-- Let {completedFutures} be the completed futures; let {remainingFutures} be the
-  remaining futures.
-- Let {update}, {newestFutures}, and {deferStates} be the result of
-  {ProcessCompletedFutures(completedFutures, originalDeferStates)}.
-- If {data} is defined on {update}:
-  - Let {ids} and {payload} be the result of {GetInitialPayload(update)}.
-  - Yield {payload}.
-  - If {hasNext} on {payload} is not {true}, complete this incremental result
-    stream and return.
-- Otherwise:
-  - Let {ids} and {payload} be the result of {GetSubsequentPayload(pending,
-    originalIds, update)}.
-  - If {hasNext} is not the only entry on {payload}, yield {payload}.
-  - If {hasNext} on {payload} is {false}, complete this incremental result
-    stream and return.
-- Yield the results of {YieldIncrementalResults(newestFutures, ids, deferStates,
-  remainingFutures)}.
-
-GetInitialPayload(update):
-
-- Let {ids} be a new unordered map.
-- Initialize {pending} to an empty list.
-- For each {newPendingResult} in {pending} on {update}:
-  - Let {path} and {label} be the corresponding entries on {newPendingResult}.
-  - Let {id} be a unique identifier for this {newPendingResult}.
-  - Set the entry for {newPendingResult} in {ids} to {id}.
-  - Let {pendingEntry} be an unordered map containing {path}, {label}, and {id}.
-  - Append {pendingEntry} to {pending}.
-- Let {data} and {errors} be the corresponding entries on {initialResult}.
-- Let {payload} be an unordered map containing {data} and {errors}.
-- If {data} is {null}, return {ids} and {payload}.
-- If {pending} is not empty: - - Set the corresponding entry on {payload} to {pending}. - - Set the entry for {hasNext} on {payload} to {true}. -- Return {ids} and {payload}. - -GetSubsequentPayload(newPendingResults, originalIds, update): - -- Let {ids} be a new unordered map containing all of the entries in - {originalIds}. -- Initialize {pending}, {incremental} and {completed} to empty lists. -- For each {newPendingResult} in {pending} on {update}: - - Let {path} and {label} be the corresponding entries on {newPendingResult}. - - Let {id} be a unique identifier for this {newPendingResult}. - - Set the entry for {newPendingResult} in {ids} to {id}. - - Let {pendingEntry} be an unordered map containing {path}, {label}, and {id}. - - Append {pendingEntry} to {pending}. -- For each {completedEntry} in {completed} on {update}: - - Let {newCompletedEntry} be a new empty unordered map. - - Let {pendingResult} be the corresponding entry on {completedEntry}. - - Let {id} be the entry for {pendingResult} on {ids}. - - Remove the entry on {ids} for {pendingResult}. - - Set the corresponding entry on {newCompletedEntry} to {id}. - - Let {errors} be the corresponding entry on {completedEntry}. - - If {errors} is defined, set the corresponding entry on {newCompletedEntry} - to {errors}. - - Append {newCompletedEntry} to {completed}. -- For each {incrementalResult} in {incremental} on {update}: - - If {incrementalResult} represents completion of Stream Items: - - Let {stream} be the corresponding entry on {incrementalResult}. - - Let {id} be the corresponding entry on {ids} for {stream}. - - Let {items} and {errors} be the corresponding entries on - {incrementalResult}. - - Let {incrementalEntry} be an unordered map containing {id}, {items}, and - {errors}. - - Otherwise: - - Let {id} and {subPath} be the result of calling - {GetIdAndSubPath(incrementalResult, ids)}. - - Let {data} and {errors} be the corresponding entries on - {incrementalResult}. 
-  - Let {incrementalEntry} be an unordered map containing {id}, {data}, and
-    {errors}.
-  - Append {incrementalEntry} to {incremental}.
-- Let {hasNext} be {false} if {ids} is empty, otherwise {true}.
-- Let {payload} be an unordered map containing {hasNext}.
-- If {pending} is not empty, set the corresponding entry on {payload} to
-  {pending}.
-- If {incremental} is not empty, set the corresponding entry on {payload} to
-  {incremental}.
-- If {completed} is not empty, set the corresponding entry on {payload} to
-  {completed}.
-- Return {ids} and {payload}.
-
-GetIdAndSubPath(deferredResult, ids):
-
-- Initialize {releasedFragments} to an empty list.
-- Let {deferredFragments} be the corresponding entry on {deferredResult}.
-- For each {deferredFragment} in {deferredFragments}:
-  - Let {id} be the entry for {deferredFragment} on {ids}.
-  - If {id} is defined, append {deferredFragment} to {releasedFragments}.
-- Let {currentFragment} be the first member of {releasedFragments}.
-- Let {currentPath} be the entry for {path} on {currentFragment}.
-- Let {currentPathLength} be the length of {currentPath}.
-- For each remaining {deferredFragment} within {deferredFragments}:
-  - Let {path} be the corresponding entry on {deferredFragment}.
-  - Let {pathLength} be the length of {path}.
-  - If {pathLength} is larger than {currentPathLength}:
-    - Set {currentPathLength} to {pathLength}.
-    - Set {currentFragment} to {deferredFragment}.
-- Let {id} be the entry on {ids} for {currentFragment}.
-- If {id} is not defined, return.
-- Let {path} be the corresponding entry on {currentFragment}.
-- Let {subPath} be the subset of {path}, omitting the first {currentPathLength}
-  entries.
-- Return {id} and {subPath}.
+- Let {completedFutures} be those completed futures.
+- Let {update}, {newestFutures}, {futureStates}, and {deferStates} be the result
+  of {ProcessCompletedFutures(completedFutures, futureStates,
+  originalDeferStates)}.
+- Let {data}, {incremental}, and {completed} be the corresponding entries on
+  {update}.
+- If {data} is defined, or if either {incremental} or {completed} is not empty,
+  yield {update}.
+- If {futureStates} is empty, complete this incremental result stream and
+  return.
+- Yield the results of {YieldIncrementalResults(newestFutures, futureStates,
+  deferStates)}.
 
 ### Processing Completed Futures
 
@@ -767,15 +783,19 @@ possibly:
 
 - Contributing data for the next payload.
 - Containing additional pending results or futures.
 
 When encountering completed futures, {ProcessCompletedFutures()} calls itself
 recursively on any new futures in case they have been completed.
 
-ProcessCompletedFutures(completedFutures, originalDeferStates,
-originalNewFutures, originalUpdate).
+ProcessCompletedFutures(completedFutures, originalFutureStates,
+originalDeferStates, originalNewFutures, originalUpdate):
 
-- Let {deferStates} be a new unordered map containing all entries in
-  {originalDeferStates}.
+- Let {futureStates} be a new unordered map containing all entries in
+  {originalFutureStates}.
+- Let {deferStates} be {originalDeferStates}.
- Let {pending}, {incremental}, and {completed} be new lists containing all the
  items in the corresponding entries on {originalUpdate}.
- Let {newFutures} be a new set containing all members of {originalNewFutures}.
- For each {completedFuture} in {completedFutures}:
+  - Let {futureState} be the entry for {completedFuture} in {futureStates}.
+  - If {futureState} is not defined, continue to the next {completedFuture} in
+    {completedFutures}.
  - If {completedFuture} completes the initial result:
    - Let {initialResult} be the result of {completedFuture}.
    - Let {newPendingResults} and {futures} be the corresponding entries on
@@ -785,40 +805,43 @@ originalNewFutures, originalUpdate).
    - Let {data} and {errors} be the corresponding entries on {initialResult}.
    - Let {update} be a new unordered map containing {data}, {errors}, and
      {pending}.
+    - Remove the entry for {completedFuture} from {futureStates}.
- Return {update}, {pending}, {newFutures}, and {deferStates}. - Otherwise, if {completedFuture} incrementally completes a stream: - Let {resultUpdate}, {resultPending}, {resultNewFutures}, and {deferStates} be the result of {GetUpdateForStreamItems(deferStates, completedFuture)}. + - Remove the entry for {completedFuture} from {futureStates}. - Otherwise: - - Let {resultUpdate}, {resultPending}, {resultNewFutures}, and {deferStates} - be the result of {GetUpdateForDeferredResult(deferStates, - completedFuture)}. + - Let {sent} be the corresponding entry on {futureState}. + - If {sent} is {true}, continue to the next {completedFuture} in + {completedFutures}. + - Let {resultUpdate}, {resultPending}, {resultNewFutures}, {futureStates}, + and {deferStates} be the result of + {GetUpdateForDeferredResult(futureStates, deferStates, completedFuture)}. - Append all items in {resultPending} to {pending}. - Add all items in {resultNewFutures} to {newFutures}. - Append all of the items in {incremental} and {completed} on {resultUpdate} to {incremental} and {completed}, respectively. -- Let {newCompletedFutures} be the completed futures from {newFutures}; let - {remainingNewFutures} be the remaining futures. +- Let {newCompletedFutures} be the completed futures from {newFutures}. - Let {update} be a new unordered map containing {pending}, {incremental}, and {completed}. - If {newCompletedFutures} is empty: - - Return {update}, {newFutures}, and {deferStates}. + - Return {update}, {newFutures}, {futureStates}, and {deferStates}. - Return the result of {ProcessCompletedFutures(newCompletedFutures, - deferStates, remainingNewFutures, update)}. + futureStates, deferStates, update)}. FilterDefers(newPendingResults, futures, originalDeferStates): - Let {streamFutures} and {deferStates} be the result of - FilterDeferredFutures(originalDeferStates, futures). -- Initialize {pending} to an empty list. + GetStreamFutures(originalDeferStates, futures). 
- Let {pending}, {newFutures}, and {deferStates} be the result of
-  {FilterDoublyDeferredFragments(newPendingResults, deferStates)}.
+  {GetSinglyDeferredFutures(newPendingResults, deferStates)}.
- Return {pending}, {newFutures}, and {deferStates}.
 
-FilterDeferredFutures(deferStates, futures):
+GetStreamFutures(originalDeferStates, futures):
 
- Initialize {streamFutures} to an empty list.
-- Let {deferState} be a new unordered map containing all entries in
+- Let {deferStates} be a new unordered map containing all entries in
  {originalDeferStates}.
- For each {future} of {futures}.
  - If {future} incrementally completes a stream:
@@ -829,25 +852,22 @@ FilterDeferredFutures(deferStates, futures):
  - For each {deferredFragment} of {deferredFragments}:
    - Let {deferState} be the entry in {deferStates} for {deferredFragment}.
    - If {deferState} is not defined:
-      - Let {pendingFutures} be a new set containing {future}.
-      - Let {count} be {1}.
-      - Let {newDeferState} be a new unordered map containing {pendingFutures}
-        and {count}.
+      - Let {unreleasedFutures} be a new list containing {future}.
+      - Let {newDeferState} be a new unordered map containing
+        {unreleasedFutures}.
    - Otherwise:
      - Let {newDeferState} be a new unordered map containing all entries in
        {deferState}.
-      - Reset {pendingFutures} on {newDeferState} to a set containing all of its
-        original members as well as {future}.
-      - Increment {count} on {newDeferState}.
+      - Reset {unreleasedFutures} on {newDeferState} to a new list containing
+        all of the original items with {future} appended.
    - Set the entry for {deferredFragment} in {deferStates} to {newDeferState}.
- Return {streamFutures} and {deferStates}.
 
-FilterDoublyDeferredFragments(newPendingResults, originalDeferStates):
+GetSinglyDeferredFutures(newPendingResults, originalDeferStates):
 
- Let {deferStates} be a new unordered map containing all entries in
  {originalDeferStates}.
- Initialize {pending} and {newFutures} to empty lists.
-- Initialize {newFutures} to the empty set.
- For each {newPendingResult} in {newPendingResults}:
  - If {newPendingResult} will incrementally complete a stream:
    - Append {newPendingResult} to {pending}.
@@ -859,9 +879,9 @@ FilterDoublyDeferredFragments(newPendingResults, originalDeferStates):
      deferStates)}.
  - If {parent} is not defined:
    - Append {newPendingResult} to {pending}.
-    - Let {deferState} be the entry in {deferStates} for {newPendingResult}.
-    - Let {pendingFutures} be the corresponding entry on {deferState}.
-    - Append all items in {pendingFutures} to {newFutures}.
+    - Let {unreleasedFutures} and {deferStates} be the result of
+      {ReleaseFragment(newPendingResult, deferStates)}.
+    - Append all of the items in {unreleasedFutures} to {newFutures}.
    - Continue to the next {newPendingResult} in {newPendingResults}.
  - Let {parentDeferState} be the entry in {deferStates} for {parent}.
  - Let {newParentDeferState} be a new unordered map containing all entries in
@@ -870,15 +890,29 @@ FilterDoublyDeferredFragments(newPendingResults, originalDeferStates):
  - Let {newChildren} be a new list containing all members of {children} on
    {newParentDeferState} as well as {newPendingResult}.
  - Set the entry for {children} in {newParentDeferState} to {newChildren}.
-- Return {pending} and {deferStates}.
+- Return {pending}, {newFutures}, and {deferStates}.
 
-GetNonEmptyParent(deferredFragment, futuresByFragment):
+ReleaseFragment(deferredFragment, originalDeferStates):
+
+- Let {deferStates} be a new unordered map containing all entries in
+  {originalDeferStates}.
+- Let {deferState} be the entry in {deferStates} for {deferredFragment}.
+- Let {unreleasedFutures} be the corresponding entry on {deferState}.
+- Let {pendingFutures} be a new list containing all members of {pendingFutures}
+  on {deferState}.
+- Append all of the items in {unreleasedFutures} to {pendingFutures}.
+- Reset {unreleasedFutures} on {deferState} to an empty list.
+- Set the corresponding entry on {deferState} to {pendingFutures}.
+- Return {unreleasedFutures} and {deferStates}.
+
+GetNonEmptyParent(deferredFragment, deferStates):
 
- Let {parent} be the corresponding entry on {deferredFragment}.
- If {parent} is not defined, return.
-- Let {parentFutures} be the entry for {parent} on {futuresByFragment}.
-- If {parentFutures} is not defined, return the result of
-  {GetNonEmptyParent(parent, futuresByFragment)}.
+- Let {parentDeferState} be the entry for {parent} on {deferStates}.
+- Let {futures} be the corresponding entry on {parentDeferState}.
+- If {futures} is empty, return the result of {GetNonEmptyParent(parent,
+  deferStates)}.
- Return {parent}.
 
 GetUpdateForStreamItems(originalDeferStates, completedFuture):
@@ -907,12 +941,13 @@ GetUpdateForStreamItems(originalDeferStates, completedFuture):
  {FilterDefers(newPendingResults, futures, originalDeferStates)}.
- Return {update}, {pending}, {newFutures}, and {deferStates}.
 
-GetUpdateForDeferredResult(originalDeferStates, completedFuture):
+GetUpdateForDeferredResult(originalFutureStates, originalDeferStates,
+completedFuture):
 
-- Let {deferredResult} be the result of {completedFuture}.
-- Let {deferStates} be a new unordered map containing all the entries in
-  {originalDeferStates}.
+- Let {futureStates} and {deferStates} be new unordered maps containing all the
+  entries in {originalFutureStates} and {originalDeferStates}, respectively.
- Initialize {newFutures} to the empty set.
+- Let {deferredResult} be the result of {completedFuture}.
- Let {deferredFragments}, {data}, and {errors} be the corresponding entries on
  {deferredResult}.
- If {data} is {null}:
@@ -921,8 +956,8 @@ GetUpdateForDeferredResult(originalDeferStates, completedFuture):
    - Let {deferState} be the entry on {deferStates} for {deferredFragment}.
    - If {deferState} is not defined, continue to the next {deferredFragment} of
      {deferredFragments}.
-    - Let {deferStates} be the result of {RemoveFragment(deferredFragment,
-      deferState, deferStates)}.
+    - Let {futureStates} and {deferStates} be the result of
+      {RemoveFragment(deferredFragment, deferState, futureStates, deferStates)}.
    - Let {completedEntry} be an unordered map containing {errors}.
    - Set the entry for {pendingResult} in {completedEntry} to
      {deferredFragment}.
@@ -936,17 +971,17 @@ GetUpdateForDeferredResult(originalDeferStates, completedFuture):
      {deferredFragments}.
    - Let {newDeferState} be a new unordered map containing all entries on
      {deferState}.
-    - Decrement {count} on {newDeferState}.
-    - Let {newCompletedFutures} be a new set containing all members of
-      {completedFutures} on {newDeferState}.
+    - Let {newCompletedFutures} be a new list containing all members of
+      {completedFutures} on {newDeferState} with {completedFuture} appended.
    - Set the {completedFutures} entry on {newDeferState} to
      {newCompletedFutures}.
-    - Add {completedFuture} to {newCompletedFutures}.
-    - Let {count} be the corresponding entry on {newDeferState}.
-    - If {count} is {0}:
+    - Let {pendingFutures} be the corresponding entry on {newDeferState}.
+    - If the size of {newCompletedFutures} is equal to the size of
+      {pendingFutures}:
      - Let {deferStates}, {fragmentPending}, {fragmentIncremental},
        {fragmentCompleted}, and {fragmentNewFutures} be the result of
-        {CompleteFragment(deferredFragment, deferState, deferStates)}.
+        {CompleteFragment(deferredFragment, deferState, futureStates,
+        deferStates)}.
      - Append all items in {fragmentPending}, {fragmentIncremental}, and
        {fragmentCompleted} to {pending}, {incremental}, and {completed},
        respectively.
@@ -955,21 +990,32 @@
  {completed}.
- Return {update}, {newFutures}, and {deferStates}.
-RemoveFragment(deferredFragment, deferState, originalDeferStates):
+RemoveFragment(deferredFragment, deferState, originalFutureStates,
+originalDeferStates):
 
-- Let {deferStates} be a new unordered map containing all entries in
-  {originalDeferStates}.
+- Let {futureStates} and {deferStates} be new unordered maps containing all the
+  entries in {originalFutureStates} and {originalDeferStates}, respectively.
- Remove the entry for {deferredFragment} on {deferStates}.
-- Let {children} be the corresponding entry on {deferState}.
+- Let {futures} and {children} be the corresponding entries on {deferState}.
+- For each {future} of {futures}:
+  - Let {futureState} be the entry for {future} on {futureStates}.
+  - Let {newFutureState} be a new unordered map containing all entries in
+    {futureState}.
+  - Reset {defers} on {newFutureState} to a new set containing all of the
+    original members except for {deferredFragment}.
+  - If {defers} on {newFutureState} is empty, remove the entry for {future} in
+    {futureStates}.
+  - Otherwise, set the entry for {future} in {futureStates} to {newFutureState}.
- For each {child} of {children}:
  - Let {childDeferState} be the entry on {deferStates} for {child}.
  - If {childDeferState} is not defined, continue to the next {child} of
    {children}.
  - Let {futureStates} and {deferStates} be the result of {RemoveFragment(child,
-    childDeferState, deferStates)}.
-- Return {deferStates}.
+    childDeferState, futureStates, deferStates)}.
+- Return {futureStates} and {deferStates}.
 
-CompleteFragment(deferredFragment, deferState, originalDeferStates):
+CompleteFragment(deferredFragment, deferState, futureStates,
+originalDeferStates):
 
- Let {deferStates} be a new unordered map containing all entries in
  {originalDeferStates}.
@@ -978,6 +1024,17 @@ CompleteFragment(deferredFragment, deferState, originalDeferStates):
- Initialize {pending}, {incremental}, and {completed} to empty lists.
- Initialize {newFutures} to the empty set.
- For each {completedFuture} in {completedFutures}:
+  - Let {futureState} be the entry for {completedFuture} in {futureStates}.
+  - Let {sent} be the corresponding entry on {futureState}.
+  - If {sent} is {true}, continue to the next {completedFuture} in
+    {completedFutures}.
+  - Let {newFutureState} be a new unordered map containing all entries in
+    {futureState}.
+  - Set the entry for {sent} in {newFutureState} to {true}.
+  - Set the entry for {completedFuture} in {futureStates} to {newFutureState}.
+  - Decrement the entry for {count} on {newFutureState}.
+  - If {count} on {newFutureState} is {0}, remove the entry for
+    {completedFuture} from {futureStates}.
  - Let {deferredResult} be the result of {completedFuture}.
  - Append {deferredResult} to {incremental}.
  - Let {newPendingResults} and {futures} be the corresponding entries on
@@ -986,36 +1043,26 @@ CompleteFragment(deferredFragment, deferState, originalDeferStates):
    {FilterDefers(newPendingResults, futures, deferStates)}.
  - Append all items in {resultPending} to {pending}.
  - Add all items in {resultNewFutures} to {newFutures}.
-  - Let {deferState} be the result of {RemoveFuture(completedFuture,
-    deferStates)}.
- Append {deferredFragment} to {completed}.
- Let {children} be the corresponding entry on {deferState}.
- Append all items in {children} to {pending}.
- For each {child} of {children}:
+  - Let {unreleasedFutures} and {deferStates} be the result of
+    {ReleaseFragment(child, deferStates)}.
+  - Append all items in {unreleasedFutures} to {newFutures}.
  - Let {childDeferState} be the entry for {child} on {deferStates}.
-  - Let {deferStates}, {childPending}, {childIncremental}, {childCompleted}, and
-    {childNewFutures} be the result of {CompleteFragment(child, childDeferState,
-    deferStates)}.
-  - Append all items in {childPending}, {childIncremental}, and {childCompleted}
-    to {pending}, {incremental}, and {completed}, respectively.
-  - Add all items in {childNewFutures} to {newFutures}.
+ - Let {pendingFutures} and {completedFutures} be the corresponding entries on + {childDeferState}. + - If the size of {pendingFutures} is equal to the size of {completedFutures}: + - Let {deferStates}, {childPending}, {childIncremental}, {childCompleted}, + and {childNewFutures} be the result of {CompleteFragment(child, + childDeferState, deferStates)}. + - Append all items in {childPending}, {childIncremental}, and + {childCompleted} to {pending}, {incremental}, and {completed}, + respectively. + - Add all items in {childNewFutures} to {newFutures}. - Return {deferStates}, {pending}, {incremental}, {completed}, and {newFutures}. -RemoveFuture(completedFuture, deferStates): - -- Let {deferStates} be a new unordered map containing all entries in - {originalDeferStates}. -- Let {deferredResult} be the result of {completedFuture}. -- Let {deferredFragments} be the corresponding entry on {deferredResult}. -- For each {deferredFragment} in {deferredFragments}: - - Let {deferState} be the entry on {deferStates} for {deferredFragment}. - - If {deferState} is not defined, continue to the next {deferredFragment} of - {deferredFragments}. - - Reset {pendingFutures} and {completedFutures} on {deferState} to new sets - containing all of their original members, respectively, except for - {completedFuture}. -- Return {deferStates}. 
- ## Executing a Field Plan To execute a field plan, the object value being evaluated and the object type From 19a4757ad172b1ef0b9f0e3777e25b169a9c4247 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 19 Jan 2024 11:00:24 +0200 Subject: [PATCH 22/46] small fix with regard to update packaging --- spec/Section 6 -- Execution.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 75884438d..e01691dae 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -808,20 +808,19 @@ originalDeferStates, originalNewFutures, originalUpdate). - Remove the entry for {completedFuture} from {futureStates}. - Return {update}, {pending}, {newFutures}, and {deferStates}. - Otherwise, if {completedFuture} incrementally completes a stream: - - Let {resultUpdate}, {resultPending}, {resultNewFutures}, and {deferStates} - be the result of {GetUpdateForStreamItems(deferStates, completedFuture)}. + - Let {resultUpdate}, {resultNewFutures}, and {deferStates} be the result of + {GetUpdateForStreamItems(deferStates, completedFuture)}. - Remove the entry for {completedFuture} from {futureStates}. - Otherwise: - Let {sent} be the corresponding entry on {futureState}. - If {sent} is {true}, continue to the next {completedFuture} in {completedFutures}. - - Let {resultUpdate}, {resultPending}, {resultNewFutures}, {futureStates}, - and {deferStates} be the result of - {GetUpdateForDeferredResult(futureStates, deferStates, completedFuture)}. - - Append all items in {resultPending} to {pending}. + - Let {resultUpdate}, {resultNewFutures}, {futureStates}, and {deferStates} + be the result of {GetUpdateForDeferredResult(futureStates, deferStates, + completedFuture)}. - Add all items in {resultNewFutures} to {newFutures}. - - Append all of the items in {incremental} and {completed} on {resultUpdate} - to {incremental} and {completed}, respectively. 
+  - Append all of the items in {pending}, {incremental}, and {completed} on
+    {resultUpdate} to {pending}, {incremental}, and {completed}, respectively.
- Let {newCompletedFutures} be the completed futures from {newFutures}.
- Let {update} be a new unordered map containing {pending}, {incremental}, and
  {completed}.
@@ -939,7 +938,8 @@ GetUpdateForStreamItems(originalDeferStates, completedFuture):
  {streamItems}.
- Let {pending}, {newFutures}, and {deferStates} be the result of
  {FilterDefers(newPendingResults, futures, originalDeferStates)}.
-- Return {update}, {pending}, {newFutures}, and {deferStates}.
+- Set the corresponding entry on {update} to {pending}.
+- Return {update}, {newFutures}, and {deferStates}.

From a9e2d875851063a0cccdfdfed45618fc8221ad45 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 19 Jan 2024 11:01:39 +0200
Subject: [PATCH 23/46] editorial change

---
 spec/Section 6 -- Execution.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index e01691dae..9649a3269 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -780,8 +780,8 @@ possibly:
- Contributing data for the next payload.
- Containing additional pending results or futures.
 
-When encountering completed futures, {ProcessCompletedFutures()} calls itself
-recursively on any new futures in case they have been completed.
+{ProcessCompletedFutures()} may call itself recursively on any new futures in
+the event that they have completed.
 
ProcessCompletedFutures(completedFutures, originalFutureStates,
originalDeferStates, originalNewFutures, originalUpdate):
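Note (non-normative, not part of the patch series): the payload packaging that
{GetPending()} and {GetSubsequentResponse()} describe can be sketched in
Python. The dictionary shapes and the `key` field used to identify pending
results below are illustrative assumptions, not structures defined by the
spec.

```python
def get_pending(new_pending_results, original_ids):
    """Assign a fresh opaque id to each newly released pending result."""
    ids = dict(original_ids)
    pending = []
    for result in new_pending_results:
        new_id = str(len(ids))  # any unique identifier works
        ids[result["key"]] = new_id
        entry = {"id": new_id, "path": result["path"]}
        if result.get("label") is not None:
            entry["label"] = result["label"]
        pending.append(entry)
    return pending, ids


def get_subsequent_response(update, original_ids):
    """Package one subsequent payload: completion releases an id, and
    hasNext becomes false once no pending results remain."""
    pending, ids = get_pending(update.get("pending", []), original_ids)
    completed = []
    for entry in update.get("completed", []):
        completed.append({"id": ids.pop(entry["key"])})
    payload = {"hasNext": bool(ids)}
    if pending:
        payload["pending"] = pending
    if completed:
        payload["completed"] = completed
    return ids, payload
```

For example, completing the only outstanding pending result produces a final
payload whose `hasNext` entry is false.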
From f43b57f5d3f862a315dfddff2c1e3a57f90c03ef Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 19 Jan 2024 16:19:01 +0200 Subject: [PATCH 24/46] Fix typo --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 9649a3269..3208d9a74 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -1105,7 +1105,7 @@ GetNewDeferredFragments(newDeferUsages, path, deferMap): - Let {newDeferMap} be a new unordered map of Defer Usage records to Deferred Fragment records containing all of the entries in {deferMap}. - For each {deferUsage} in {newDeferUsages}: - - Let {parentDeferUsage} be the corresponding entry on {deferUsage.} + - Let {parentDeferUsage} be the corresponding entry on {deferUsage}. - Let {parent} be the entry in {deferMap} for {parentDeferUsage}. - Let {label} be the corresponding entry on {deferUsage}. - Let {newDeferredFragment} be an unordered map containing {ancestors}, {path} From 986853f14b22eb6bd280f1a26d2cfa9b8777ea11 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 26 Jan 2024 15:16:14 +0200 Subject: [PATCH 25/46] use some initialization magic --- spec/Section 6 -- Execution.md | 32 +++++++++++++++++++------------- 1 file changed, 19 insertions(+), 13 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 3208d9a74..c3deeffdd 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -339,6 +339,7 @@ serial): - Let {future} be the future result of {ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet, serial)}. +- Initiate {future}. - Let {futures} be a list containing {future}. - Let {incrementalResults} be the result of {YieldIncrementalResults(futures)}. - Wait for the first result in {incrementalResults} to be available. 
@@ -745,7 +746,6 @@ YieldIncrementalResults(newFutures, originalFutureStates, originalDeferStates):
- Let {futureStates} be a new unordered map containing all entries in
  {originalFutureStates}.
- For each {future} in {newFutures}:
-  - If {future} is not initiated, initiate it.
  - Let {futureState} be the entry for {future} in {futureStates}.
  - If {futureState} is not defined:
    - Let {futureState} be a new unordered map.
@@ -1078,21 +1078,19 @@ variableValues, serial, path, deferUsageSet, deferMap):
  {fieldPlan}.
- Let {newPendingResults} and {newDeferMap} be the result of
  {GetNewDeferredFragments(newDeferUsages, path, deferMap)}.
-- Let {supplementalFutures} be the result of {GetFutures(objectType,
-  objectValue, variableValues, newGroupedFieldSets, path, newDeferMap)}.
-- Let {deferredFutures} be the result of {GetFutures(objectType, objectValue,
-  variableValues, newGroupedFieldSetsRequiringDeferral, path, newDeferMap)}.
-- Let {futures} be a list containing all members of {supplementalFutures} and
-  {deferredFutures}.
- Allowing for parallelization, perform the following steps:
  - Let {data}, {newPendingResults}, and {nestedFutures} be the result of
    running {ExecuteGroupedFieldSet(groupedFieldSet, objectType, objectValue,
    variableValues, path, deferUsageSet, newDeferMap)} _serially_ if {serial} is
    {true}, _normally_ (allowing parallelization) otherwise.
-  - Initiate all futures in {supplementalFutures}.
-  - If early execution of deferred fields is desired, following any
-    implementation specific deferral of further execution, initiate all futures
-    in {deferredFutures}.
+  - Let {supplementalFutures} be the result of
+    {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+    newGroupedFieldSets, true, path, newDeferMap)}.
+  - Let {deferredFutures} be the result of
+    {ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues,
+    newGroupedFieldSetsRequiringDeferral, false, path, newDeferMap)}.
+- Let {futures} be a list containing all members of {supplementalFutures} and + {deferredFutures}. - Append all items in {nestedNewPendingResults} and {nestedFutures} to {newPendingResults} and {futures}. - Return {data}, {newPendingResults}, and {futures}. @@ -1114,8 +1112,8 @@ GetNewDeferredFragments(newDeferUsages, path, deferMap): - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. - Return {newDeferredFragments} and {newDeferMap}. -GetFutures(objectType, objectValue, variableValues, newGroupedFieldSets, path, -deferMap): +ExecuteDeferredGroupedFieldSets(objectType, objectValue, variableValues, +newGroupedFieldSets, supplemental, path, deferMap): - Initialize {futures} to an empty list. - For each {deferUsageSet} and {groupedFieldSet} in {newGroupedFieldSets}: @@ -1127,6 +1125,14 @@ deferMap): {ExecuteDeferredGroupedFieldSet(groupedFieldSet, objectType, objectValue, variableValues, deferredFragments, path, deferUsageSet, deferMap)}, incrementally completing {deferredFragments}. + - Let {deferredFragments} be the list of Deferred Fragments incrementally + completed by {future}. + - If {supplemental} is {true} and any Deferred Fragment in + {deferredFragments} has been released as pending, initiate {future}. + - Otherwise, initiate {future} as soon as any Deferred Fragment in + {deferredFragments} is released as pending, or, if early execution of + deferred fields is desired, following any implementation specific deferral + of further execution. - Append {future} to {futures}. - Return {futures}. 
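The conditional initiation rule that {ExecuteDeferredGroupedFieldSets} describes above can be sketched in code. This is a hypothetical model, not spec text: `Future`, `DeferredFragment`, `scheduleFuture`, and the `onRelease` callback list are all illustrative names, and the implementation-specific early-execution policy is reduced to a boolean.

```typescript
// Illustrative model of the initiation policy: a "supplemental" future
// initiates immediately only if one of its Deferred Fragments has already
// been released as pending; otherwise the future waits for the first
// release, or initiates eagerly when early execution is desired.
interface DeferredFragment {
  released: boolean;
  onRelease: Array<() => void>; // callbacks fired when released as pending
}

class Future {
  initiated = false;
  initiate(): void {
    this.initiated = true;
  }
}

function scheduleFuture(
  future: Future,
  fragments: DeferredFragment[],
  supplemental: boolean,
  earlyExecution = false, // hypothetical stand-in for the deferral policy
): void {
  const anyReleased = fragments.some((f) => f.released);
  if (anyReleased || (!supplemental && earlyExecution)) {
    future.initiate();
    return;
  }
  // Otherwise, initiate as soon as any fragment is released as pending.
  for (const fragment of fragments) {
    fragment.onRelease.push(() => future.initiate());
  }
}
```

The sketch only captures the scheduling decision; actual execution of the grouped field set is out of scope here.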
From 77c78467d8859896302e3355ac69cdbcd0c357fa Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 26 Jan 2024 15:17:16 +0200
Subject: [PATCH 26/46] Remove GetNonEmptyParent

---
 spec/Section 6 -- Execution.md | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index c3deeffdd..8421b324a 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -874,13 +874,14 @@ GetSinglyDeferredFutures(newPendingResults, originalDeferStates):

  - Let {deferState} be the entry in {deferStates} for {newPendingResult}.
  - If {deferState} is not defined:
    - Continue to the next {newPendingResult} in {newPendingResults}.
-  - Let {parent} be the result of {GetNonEmptyParent(newPendingResult,
-    deferStates)}.
+  - Let {parent} be the corresponding entry on {newPendingResult}.
  - If {parent} is not defined:
+    - Let {futuresToRelease} and {deferStates} be the result of
+      {ReleaseFragment(newPendingResult, deferStates)}.
+    - If {futuresToRelease} is empty:
+      - Continue to the next {newPendingResult} in {newPendingResults}.
    - Append {newPendingResult} to {pending}.
-    - Let {unreleasedFutures} and {deferState} be the result of
-      {ReleaseFragment(newPendingResult, deferStates)}.
-    - Append all of the items in {unreleasedFutures} to {newFutures}.
+    - Append all of the items in {futuresToRelease} to {newFutures}.
    - Continue to the next {newPendingResult} in {newPendingResults}.
  - Let {parentDeferState} be the entry in {deferStates} for {parent}.
  - Let {newParentDeferState} be a new unordered map containing all entries in
@@ -896,7 +897,16 @@ ReleaseFragment(deferredFragment, deferStates):

- Let {deferStates} be a new unordered map containing all entries in
  {originalDeferStates}.
- Let {deferState} be the entry in {deferStates} for {deferredFragment}.
+- Remove the entry for {deferredFragment} on {deferStates}.
- Let {unreleasedFutures} be the corresponding entry on {deferState}.
+- If {unreleasedFutures} is empty:
+  - Let {children} be the corresponding entry on {deferState}.
+  - Initialize {futuresToRelease} to an empty list.
+  - For each {child} of {children}:
+    - Let {childFuturesToRelease} and {deferStates} be the result of
+      {ReleaseFragment(child, deferStates)}.
+    - Append all of the items in {childFuturesToRelease} to {futuresToRelease}.
+  - Return {futuresToRelease} and {deferStates}.
- Let {pendingFutures} be a new list containing all members of {pendingFutures}
  on {deferState}.
- Append all of the items in {unreleasedFutures} to {pendingFutures}.

@@ -904,16 +914,6 @@ ReleaseFragment(deferredFragment, deferStates):

- Set the corresponding entry on {deferState} to {pendingFutures}.
- Return {unreleasedFutures} and {deferStates}.

-GetNonEmptyParent(deferredFragment, deferStates):
-
-- Let {parent} be the corresponding entry on {deferredFragment}.
-- If {parent} is not defined, return.
-- Let {parentDeferState} be the entry for {parent} on {deferStates}.
-- Let {futures} be the corresponding entry on {parentDeferState}.
-- If {futures} is empty, return the result of {GetNonEmptyParent(parent,
-  deferStates)}.
-- Return {parent}.
-
GetUpdateForStreamItems(originalDeferStates, completedFuture):

- Let {streamItems} be the result of {completedFuture}.
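The recursive release that replaces {GetNonEmptyParent} can be modeled roughly as below. This is a simplified, mutable sketch (the spec copies its maps instead of mutating them); `DeferState`, `releaseFragment`, and the string identifiers for fragments and futures are illustrative only.

```typescript
// Illustrative model: releasing a fragment promotes its unreleased futures
// to pending and hands them to the caller; a fragment with nothing of its
// own to release is dropped, and its children are released in its place.
interface DeferState {
  unreleasedFutures: string[];
  pendingFutures: string[];
  children: string[];
}

function releaseFragment(
  fragment: string,
  deferStates: Map<string, DeferState>,
): string[] {
  const state = deferStates.get(fragment);
  if (state === undefined) return [];
  if (state.unreleasedFutures.length === 0) {
    // "Empty" fragment: remove it and recursively release its children.
    deferStates.delete(fragment);
    const futuresToRelease: string[] = [];
    for (const child of state.children) {
      futuresToRelease.push(...releaseFragment(child, deferStates));
    }
    return futuresToRelease;
  }
  // Promote the unreleased futures to pending and return them to the caller.
  const futuresToRelease = state.unreleasedFutures;
  state.pendingFutures.push(...futuresToRelease);
  state.unreleasedFutures = [];
  return futuresToRelease;
}
```

Compared with the removed {GetNonEmptyParent}, the recursion now runs downward from the released fragment rather than searching upward for a non-empty ancestor.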
From 69c9f5f000524e9b84a0e7d0f6160a20edcca4c2 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 26 Jan 2024 15:19:38 +0200 Subject: [PATCH 27/46] this should never be previously defined, as each future is returned exactly once --- spec/Section 6 -- Execution.md | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 8421b324a..6540395e2 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -746,14 +746,12 @@ YieldIncrementalResults(newFutures, originalFutureStates, originalDeferStates): - Let {futureStates} be a new unordered map containing all entries in {originalFutureStates}. - For each {future} in {newFutures}: - - Let {futureState} be the entry for {future} in {futureStates}. - - If {futureState} is not defined: - - Let {futureState} be a new unordered map. - - If {futureState} incrementally completes Deferred Fragments: - - Let {defers} be those Deferred Fragments. - - Let {count} be {0}. - - Set the corresponding entries on {futureState} to {count} and {defers}. - - Set the entry for {future} in {futureStates} to {futureState}. + - Let {futureState} be a new unordered map. + - If {futureState} incrementally completes Deferred Fragments: + - Let {defers} be those Deferred Fragments. + - Let {count} be {0}. + - Set the corresponding entries on {futureState} to {count} and {defers}. + - Set the entry for {future} in {futureStates} to {futureState}. - Let {maybeCompletedFutures} be the set of keys of {originalFutureStates}. - Wait for any futures within {maybeCompletedFutures} to complete. - Let {completedFutures} be those completed futures. 
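The per-future bookkeeping this commit simplifies can be sketched as follows. Because each future is returned exactly once, a fresh state entry is created unconditionally; the countdown starts at the number of Deferred Fragments the future completes, matching the correction in the following commit. All names, and the `completedDefers` lookup in particular, are illustrative assumptions rather than spec constructs.

```typescript
// Illustrative model: register newly returned futures, tracking the
// Deferred Fragments each one incrementally completes with a countdown.
interface FutureState {
  count: number;    // remaining fragments; the future's entry is removed at 0
  defers: string[]; // the Deferred Fragments this future completes
}

function registerFutures(
  newFutures: string[],
  // Hypothetical lookup: future -> Deferred Fragments it completes, if any.
  completedDefers: Map<string, string[]>,
  futureStates: Map<string, FutureState>,
): void {
  for (const future of newFutures) {
    const defers = completedDefers.get(future) ?? [];
    // Fresh state per future: no "is it already defined?" check is needed.
    futureStates.set(future, { count: defers.length, defers });
  }
}
```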
From 0cf445e4aec801e8d93b5b0c12364746daf2064a Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 26 Jan 2024 15:20:07 +0200 Subject: [PATCH 28/46] fix mistake when setting count we start at the number of deferred fragments and then go down --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 6540395e2..0dca13f76 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -749,7 +749,7 @@ YieldIncrementalResults(newFutures, originalFutureStates, originalDeferStates): - Let {futureState} be a new unordered map. - If {futureState} incrementally completes Deferred Fragments: - Let {defers} be those Deferred Fragments. - - Let {count} be {0}. + - Let {count} be the length of {defers}. - Set the corresponding entries on {futureState} to {count} and {defers}. - Set the entry for {future} in {futureStates} to {futureState}. - Let {maybeCompletedFutures} be the set of keys of {originalFutureStates}. From fa670305d04661877f64488bcb0e5045a72a77d2 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 26 Jan 2024 15:20:57 +0200 Subject: [PATCH 29/46] rename {defers} to {deferredFragments} --- spec/Section 6 -- Execution.md | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 0dca13f76..04224aa89 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -748,9 +748,10 @@ YieldIncrementalResults(newFutures, originalFutureStates, originalDeferStates): - For each {future} in {newFutures}: - Let {futureState} be a new unordered map. - If {futureState} incrementally completes Deferred Fragments: - - Let {defers} be those Deferred Fragments. - - Let {count} be the length of {defers}. - - Set the corresponding entries on {futureState} to {count} and {defers}. 
+ - Let {deferredFragments} be those Deferred Fragments. + - Let {count} be the length of {deferredFragments}. + - Set the corresponding entries on {futureState} to {count} and + {deferredFragments}. - Set the entry for {future} in {futureStates} to {futureState}. - Let {maybeCompletedFutures} be the set of keys of {originalFutureStates}. - Wait for any futures within {maybeCompletedFutures} to complete. @@ -999,10 +1000,10 @@ originalDeferStates): - Let {futureState} be the entry for {future} on {futureStates}. - Let {newFutureState} be a new unordered map containing all entries in {futureState}. - - Reset {defers} on {newFutureState} to a new set containing all of the - original members except for {deferredFragment}. - - If {defers} on {futureState} is empty, remove the entry for {future} in - {futureStates}. + - Reset {deferredFragments} on {newFutureState} to a new set containing all of + the original members except for {deferredFragment}. + - If {deferredFragments} on {futureState} is empty, remove the entry for + {future} in {futureStates}. - Otherwise, set the entry for {future} in {futureStates} to {newFutureState}. - For each {child} of {children}: - Let {childDeferState} be the entry on {deferStates} for {child}. From 7c0fccf5aa0477594a595e75c1c5daa5d7b70705 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 26 Jan 2024 15:21:59 +0200 Subject: [PATCH 30/46] a child defer state is always defined unless removed here --- spec/Section 6 -- Execution.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 04224aa89..4af404674 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -1007,8 +1007,6 @@ originalDeferStates): - Otherwise, set the entry for {future} in {futureStates} to {newFutureState}. - For each {child} of {children}: - Let {childDeferState} be the entry on {deferStates} for {child}. 
- - If {childDeferState} is not defined, continue to the next {child} of - {children}. - Let {deferStates} be the result of {RemoveFragment(child, childDeferState, futureStates, deferStates)}. - Return {futureStates} and {deferStates}. From 19fbef68e8cbe725413d962bccb1f07d2ea668c2 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Fri, 26 Jan 2024 15:37:12 +0200 Subject: [PATCH 31/46] get rid of count and sent --- spec/Section 6 -- Execution.md | 26 ++++++++++++-------------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 4af404674..a9c6a1334 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -748,10 +748,9 @@ YieldIncrementalResults(newFutures, originalFutureStates, originalDeferStates): - For each {future} in {newFutures}: - Let {futureState} be a new unordered map. - If {futureState} incrementally completes Deferred Fragments: - - Let {deferredFragments} be those Deferred Fragments. - - Let {count} be the length of {deferredFragments}. - - Set the corresponding entries on {futureState} to {count} and - {deferredFragments}. + - Let {pendingDeferredFragments} be those Deferred Fragments. + - Set the corresponding entry on {futureState} to + {pendingDeferredFragments}. - Set the entry for {future} in {futureStates} to {futureState}. - Let {maybeCompletedFutures} be the set of keys of {originalFutureStates}. - Wait for any futures within {maybeCompletedFutures} to complete. @@ -1000,10 +999,10 @@ originalDeferStates): - Let {futureState} be the entry for {future} on {futureStates}. - Let {newFutureState} be a new unordered map containing all entries in {futureState}. - - Reset {deferredFragments} on {newFutureState} to a new set containing all of - the original members except for {deferredFragment}. - - If {deferredFragments} on {futureState} is empty, remove the entry for - {future} in {futureStates}. 
+ - Reset {pendingDeferredFragments} on {newFutureState} to a new set containing + all of the original members except for {deferredFragment}. + - If {pendingDeferredFragments} on {futureState} is empty, remove the entry + for {future} in {futureStates}. - Otherwise, set the entry for {future} in {futureStates} to {newFutureState}. - For each {child} of {children}: - Let {childDeferState} be the entry on {deferStates} for {child}. @@ -1022,16 +1021,15 @@ originalDeferStates): - Initialize {newFutures} to the empty set. - For each {completedFuture} in {completedFutures}: - Let {futureState} be the entry for {completedFuture} in {futureStates}. - - Let {sent} be the corresponding entry on {futureState}. - - If {sent} is {true}, continue to the next {completedFuture} in + - If {futureState} is not defined, continue to the next {completedFuture} in {completedFutures}. - Let {newFutureState} be a new unordered map containing all entries in {futureState}. - - Set the entry for {sent} in {newFutureState} to {true}. - Set the entry for {completedFuture} in {futureStates} to {newFutureState}. - - Decrement the entry for {count} on {futureState}. - - If {count} on {futureState} is {0}, remove the entry for {completedFuture} - from {futureStates}. + - Reset {pendingDeferredFragments} on {newFutureState} to a new set containing + all of the original members except for {deferredFragment}. + - If {pendingDeferredFragments} is empty, remove the entry for + {completedFuture} from {futureStates}. - Let {deferredResult} be the result of {completedFuture}. - Append {deferredResult} to {incremental}. 
- Let {newPendingResults} and {futures} be the corresponding entries on

From 3a14539897d5c00ba87a9da93c73cb77d6aef862 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 26 Jan 2024 15:39:15 +0200
Subject: [PATCH 32/46] Rename GetSinglyDeferredFutures to FilterNestedDefers

---
 spec/Section 6 -- Execution.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index a9c6a1334..c896c2925 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -832,7 +832,7 @@ FilterDefers(newPendingResults, futures, originalDeferStates):

- Let {streamFutures} and {deferStates} be the result of
  GetStreamFutures(originalDeferStates, futures).
- Let {pending}, {newFutures}, and {deferStates} be the result of
-  {GetSinglyDeferredFutures(newPendingResults, deferStates)}.
+  {FilterNestedDefers(newPendingResults, deferStates)}.
- Return {pending}, {newFutures}, and {deferStates}.

GetStreamFutures(deferStates, futures):

@@ -860,7 +860,7 @@ GetStreamFutures(deferStates, futures):

- Set the entry for {deferredFragment} in {deferStates} to {newDeferState}.
- Return {streamFutures} and {deferStates}.

-GetSinglyDeferredFutures(newPendingResults, originalDeferStates):
+FilterNestedDefers(newPendingResults, originalDeferStates):

From 0d91637af512d51490f8f9ef1b79e9ce8781ae7d Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Fri, 26 Jan 2024 15:41:44 +0200
Subject: [PATCH 33/46] add some more magic

---
 spec/Section 6 -- Execution.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index c896c2925..15fda43c7 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -833,6 +833,8 @@ FilterDefers(newPendingResults, futures, originalDeferStates):

  GetStreamFutures(originalDeferStates, futures).
- Let {pending}, {newFutures}, and {deferStates} be the result of
  {FilterNestedDefers(newPendingResults, deferStates)}.
+- Notify the executor as necessary that all items in {pending} have been
+  released.
- Return {pending}, {newFutures}, and {deferStates}.

GetStreamFutures(deferStates, futures):

From d7906cfcf9b7e0c216de18046135449702068edc Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Sat, 27 Jan 2024 22:43:19 +0200
Subject: [PATCH 34/46] typo

---
 spec/Section 6 -- Execution.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 15fda43c7..74721c3e6 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -975,7 +975,7 @@ completedFuture):

    {completedFutures} on {newDeferState}, appended by {completedFuture}.
  - Set the {completedFutures} entry on {newDeferState} to
    {newCompletedFutures}.
-  - Let {futures} be the corresponding entries on {newDeferState}.
+  - Let {pendingFutures} be the corresponding entry on {newDeferState}.
  - If the size of {newCompletedFutures} is equal to the size of
    {pendingFutures}:
    - Let {deferStates}, {fragmentPending}, {fragmentIncremental},

@@ -996,8 +996,8 @@ originalDeferStates):

- Let {futureStates} and {deferStates} be a new unordered map containing all
  the entries in {originalFutureStates} and {originalDeferStates},
  respectively.
- Remove the entry for {deferredFragment} on {deferStates}.
-- Let {pendingFutures} and {children} be the corresponding entry on {deferState}.
-- For each {future} of {futures}:
+- Let {pendingFutures} and {children} be the corresponding entries on
+  {deferState}.
+- For each {future} of {pendingFutures}:
  - Let {futureState} be the entry for {future} on {futureStates}.
  - Let {newFutureState} be a new unordered map containing all entries in
    {futureState}.
From 451119051f63baf65a7662baec91f02a5a56729d Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Sun, 28 Jan 2024 06:15:37 +0200
Subject: [PATCH 35/46] refine ReleaseFragment

---
 spec/Section 6 -- Execution.md | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 74721c3e6..c2af77b02 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -897,22 +897,21 @@ ReleaseFragment(deferredFragment, deferStates):

- Let {deferStates} be a new unordered map containing all entries in
  {originalDeferStates}.
- Let {deferState} be the entry in {deferStates} for {deferredFragment}.
-- Remove the entry for {deferredFragment} on {deferStates}.
+- Initialize {futuresToRelease} to an empty list.
- Let {unreleasedFutures} be the corresponding entry on {deferState}.
-- If {unreleasedFutures} is empty:
+- Append all items in {unreleasedFutures} to {futuresToRelease}.
+- Let {pendingFutures} be a new list containing all members of {pendingFutures}
+  on {deferState}.
+- Append all of the items in {unreleasedFutures} to {pendingFutures}.
+- Set the corresponding entry on {deferState} to {pendingFutures}.
+- If {pendingFutures} is empty:
+  - Remove the entry for {deferredFragment} on {deferStates}.
  - Let {children} be the corresponding entry on {deferState}.
- - Initialize {futuresToRelease} to an empty list.
  - For each {child} of {children}:
    - Let {childFuturesToRelease} and {deferStates} be the result of
      {ReleaseFragment(child, deferStates)}.
    - Append all of the items in {childFuturesToRelease} to {futuresToRelease}.
- - Return {futuresToRelease} and {deferStates}.
-- Let {pendingFutures} be a new list containing all members of {pendingFutures}
-  on {deferState}.
-- Append all of the items in {unreleasedFutures} to {pendingFutures}.
- - Reset {unreleasedFutures} on {deferState} to an empty list.
-- Set the corresponding entry on {deferState} to {pendingFutures}. -- Return {unreleasedFutures} and {deferStates}. +- Return {futuresToRelease} and {deferStates}. GetUpdateForStreamItems(originalDeferStates, completedFuture): @@ -996,7 +995,8 @@ originalDeferStates): - Let {futureStates} and {deferStates} be a new unordered map containing all the entries in {originalFutureStates} and {originalDeferStates}, respectively. - Remove the entry for {deferredFragment} on {deferStates}. -- Let {pendingFutures} and {children} be the corresponding entry on {deferState}. +- Let {pendingFutures} and {children} be the corresponding entry on + {deferState}. - For each {future} of {pendingFutures}: - Let {futureState} be the entry for {future} on {futureStates}. - Let {newFutureState} be a new unordered map containing all entries in From a2896167e12e8e0c08a8ca389c5b9e006252dfcc Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Sun, 28 Jan 2024 06:18:22 +0200 Subject: [PATCH 36/46] complete renaming --- spec/Section 6 -- Execution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index c2af77b02..39ac78178 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -1044,9 +1044,9 @@ originalDeferStates): - Let {children} be the corresponding entry on {deferState}. - Append all items in {children} to {pending}. - For each {child} of {children}: - - Let {unreleasedFutures} and {deferStates} be the result of {ReleaseFragment( + - Let {futuresToRelease} and {deferStates} be the result of {ReleaseFragment( child, deferStates)}. - - Append all items in {unreleasedFutures} to {newFutures}. + - Append all items in {futuresToRelease} to {newFutures}. - Let {childDeferState} be the entry for {child} on {deferStates}. - Let {pendingFutures} and {completedFutures} be the corresponding entries on {childDeferState}. 
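The child-promotion step these commits refine (a completed Deferred Fragment's children become pending in its place, and releasing each child surfaces futures that may now be initiated) might look roughly like this. The shapes and names are illustrative; the spec additionally updates per-future state, which is omitted here.

```typescript
// Illustrative model: completing a fragment removes its entry, promotes its
// children to pending, and releases each child's unreleased futures.
interface DeferState {
  unreleasedFutures: string[];
  children: string[];
}

function completeFragment(
  fragment: string,
  deferStates: Map<string, DeferState>,
): { pending: string[]; newFutures: string[] } {
  const state = deferStates.get(fragment);
  if (state === undefined) return { pending: [], newFutures: [] };
  deferStates.delete(fragment);
  const pending = [...state.children];
  const newFutures: string[] = [];
  for (const child of state.children) {
    const childState = deferStates.get(child);
    if (childState === undefined) continue;
    // Releasing the child's futures makes them eligible to be initiated.
    newFutures.push(...childState.unreleasedFutures);
    childState.unreleasedFutures = [];
  }
  return { pending, newFutures };
}
```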
From 63cf6c2d4ece20fb9d6a4b2847eea9caf7e59fd8 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 29 Jan 2024 12:46:58 +0200 Subject: [PATCH 37/46] fix typo, changing ancestors to parent --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 39ac78178..bac00e359 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -1103,7 +1103,7 @@ GetNewDeferredFragments(newDeferUsages, path, deferMap): - Let {parentDeferUsage} be the corresponding entry on {deferUsage}. - Let {parent} be the entry in {deferMap} for {parentDeferUsage}. - Let {label} be the corresponding entry on {deferUsage}. - - Let {newDeferredFragment} be an unordered map containing {ancestors}, {path} + - Let {newDeferredFragment} be an unordered map containing {parent}, {path} and {label}. - Append {newDeferredFragment} to {newDeferredFragments}. - Set the entry for {deferUsage} in {newDeferMap} to {newDeferredFragment}. From 5e62f5989c3ec0c8ca3282452215650186808372 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Mon, 29 Jan 2024 14:30:47 +0200 Subject: [PATCH 38/46] move ExecuteInitialResult to later --- spec/Section 6 -- Execution.md | 91 +++++++++++++++++----------------- 1 file changed, 46 insertions(+), 45 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index bac00e359..637dce951 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -337,11 +337,9 @@ initial response. Otherwise, we return just the initial result. ExecuteRootSelectionSet(variableValues, initialValue, objectType, selectionSet, serial): -- Let {future} be the future result of {ExecuteInitialResult(variableValues, - initialValue, objectType, selectionSet, serial)}. -- Initiate {future}. -- Let {futures} be a list containing {future}. -- Let {incrementalResults} be the result of {YieldIncrementalResults(futures)}. 
+- Let {incrementalResults} be the result of + {YieldIncrementalResults(variableValues, initialValue, objectType, + selectionSet, serial)}. - Wait for the first result in {incrementalResults} to be available. - Let {initialResult} be that result. - Let {initialResponse} and {ids} be the result of @@ -350,22 +348,6 @@ serial): {MapSubsequentResultToResponse(incrementalResult, ids)}. - Return {initialResponse} and {subsequentResponses}. -ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet, -serial): - -- If {serial} is not provided, initialize it to {false}. -- Let {groupedFieldSet} and {newDeferUsages} be the result of - {CollectFields(objectType, selectionSet, variableValues)}. -- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}. -- Let {data}, {newPendingResults}, and {futures} be the result of - {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue, - variableValues, serial)}. -- Let {errors} be the list of all _field error_ raised while executing the - {groupedFieldSet}. -- Let {initialResult} be an unordered map consisting of {data}, {errors}, - {newPendingResults}, and {futures}. -- Return {initialResult}. - MapSubsequentResultToResponse(subsequentResultStream, originalIds): - Let {ids} be a new unordered map containing all of the entries in @@ -741,31 +723,50 @@ initiated. Then, any completed deferred or streamed results are processed to determine the payload to be yielded. Finally, if any pending results remain, the procedure is repeated recursively. -YieldIncrementalResults(newFutures, originalFutureStates, originalDeferStates): +YieldIncrementalResults(variableValues, initialValue, objectType, selectionSet, +serial): -- Let {futureStates} be a new unordered map containing all entries in - {originalFutureStates}. -- For each {future} in {newFutures}: - - Let {futureState} be a new unordered map. 
- - If {futureState} incrementally completes Deferred Fragments:
- - Let {pendingDeferredFragments} be those Deferred Fragments.
- - Set the corresponding entry on {futureState} to
- {pendingDeferredFragments}.
- - Set the entry for {future} in {futureStates} to {futureState}.
-- Let {maybeCompletedFutures} be the set of keys of {originalFutureStates}.
-- Wait for any futures within {maybeCompletedFutures} to complete.
-- Let {completedFutures} be those completed futures.
-- Let {update}, {newestFutures}, {futureStates}, and {deferStates} be the result
- of {ProcessCompletedFutures(completedFutures, originalFutureStates,
- originalDeferStates)}.
-- Let {data}, {incremental}, and {completed} be the corresponding entries on
- {update}.
-- If {data} is defined, or if either {incremental} and {completed} are not
- empty, yield {update}.
-- If {futureStates} is empty, complete this incremental result stream and
- return.
-- Yield the results of {YieldIncrementalResults(newestFutures, futureStates,
- deferStates)}.
+- Let {initialFuture} be the future result of
+  {ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet,
+  serial)}.
+- Let {futureStates} be a new unordered map.
+- Set the entry for {initialFuture} in {futureStates} to a new unordered map.
+- Let {deferStates} be a new unordered map.
+- Initiate {initialFuture}.
+- While {futureStates} is not empty:
+  - Let {maybeCompletedFutures} be the set of keys of {futureStates}.
+  - Wait for any futures within {maybeCompletedFutures} to complete.
+  - Let {completedFutures} be those completed futures.
+  - Let {update}, {newestFutures}, {futureStates}, and {deferStates} be the
+    result of {ProcessCompletedFutures(completedFutures, futureStates,
+    deferStates)}.
+  - For each {future} in {newestFutures}:
+    - Let {futureState} be a new unordered map.
+    - If {future} incrementally completes Deferred Fragments:
+      - Let {pendingDeferredFragments} be those Deferred Fragments.
+      - Set the corresponding entry on {futureState} to
+        {pendingDeferredFragments}.
+    - Set the entry for {future} in {futureStates} to {futureState}.
+  - Let {data}, {incremental}, and {completed} be the corresponding entries on
+    {update}.
+  - If {data} is defined, or if either {incremental} or {completed} is not
+    empty, yield {update}.
+- Complete this incremental result stream.

ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet,
serial):

- If {serial} is not provided, initialize it to {false}.
- Let {groupedFieldSet} and {newDeferUsages} be the result of
  {CollectFields(objectType, selectionSet, variableValues)}.
- Let {fieldPlan} be the result of {BuildFieldPlan(groupedFieldSet)}.
- Let {data}, {newPendingResults}, and {futures} be the result of
  {ExecuteFieldPlan(newDeferUsages, fieldPlan, objectType, initialValue,
  variableValues, serial)}.
- Let {errors} be the list of all _field error_ raised while executing the
  {groupedFieldSet}.
- Let {initialResult} be an unordered map consisting of {data}, {errors},
  {newPendingResults}, and {futures}.
- Return {initialResult}.

From c00963d3415b7a3fd6c4990021fe3e3c86b8cb93 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Tue, 30 Jan 2024 02:28:09 +0200
Subject: [PATCH 39/46] simplify

---
 spec/Section 6 -- Execution.md | 461 +++++++++++----------------------
 1 file changed, 145 insertions(+), 316 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 637dce951..5ab0f67ac 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -718,10 +718,7 @@ IsSameSet(setA, setB):

### Yielding Incremental Results

The procedure for yielding incremental results is specified by the
-{YieldIncrementalResults()} algorithm. First, any uninitiated executions are
-initiated. Then, any completed deferred or streamed results are processed to
-determine the payload to be yielded.
Finally, if any pending results remain, the
-procedure is repeated recursively.
+{YieldIncrementalResults()} algorithm.

YieldIncrementalResults(variableValues, initialValue, objectType, selectionSet,
serial):

- Let {initialFuture} be the future result of
  {ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet,
  serial)}.
+- Initialize {pendingResults}, {pendingFutures}, and {unsent} to the empty set.
+- Initialize {newPendingResultsByFragment}, {pendingFuturesByFragment}, and
+  {resultsByFragment} to empty unordered maps.
+- Initiate {initialFuture} and add it to {pendingFutures}.
+- Repeat the following steps:
+  - Wait for any futures within {pendingFutures} to complete.
+  - Initialize {pending}, {incremental}, and {completed} to empty lists.
+  - Let {completedFutures} be those completed futures.
+  - For each {future} in {completedFutures}:
+    - Remove {future} from {pendingFutures}.
+    - Let {result} be the result of {future}.
+    - If {result} represents the Initial Result:
+      - Let {data} and {errors} be the corresponding entries on {result}.
+    - Otherwise, if {result} incrementally completes a Stream:
+      - Let {stream}, {items}, and {errors} be the corresponding entries on
+        {result}.
+      - If {items} is not defined, the stream has asynchronously ended:
+        - Let {completedEntry} be an empty unordered map.
+        - Set the entry for {pendingResult} on {completedEntry} to {stream}.
+        - Append {completedEntry} to {completed}.
+        - Remove {stream} from {pendingResults}.
+      - Otherwise, if {items} is {null}:
+        - Let {completedEntry} be an unordered map containing {errors}.
+        - Set the entry for {pendingResult} on {completedEntry} to {stream}.
+        - Append {completedEntry} to {completed}.
+        - Remove {stream} from {pendingResults}.
+      - Otherwise:
+        - Append {result} to {incremental}.
+    - Otherwise:
+      - Let {deferredFragments}, {data}, and {errors} be the corresponding
+        entries on {result}.
+      - If {data} is {null}:
+        - For each {deferredFragment} in {deferredFragments}:
+          - If {deferredFragment} is not contained by {pendingResults}, continue
+            to the next {deferredFragment} in {deferredFragments}.
+          - Let {completedEntry} be an unordered map containing {errors}.
+          - Set the entry for {pendingResult} on {completedEntry} to
+            {deferredFragment}.
+          - Append {completedEntry} to {completed}.
+          - Remove {deferredFragment} from {pendingResults}.
+      - Otherwise:
+        - For each {deferredFragment} in {deferredFragments}:
+          - If {deferredFragment} is not contained by {pendingResults}, continue
+            to the next {deferredFragment} in {deferredFragments}.
+          - Let {resultsForFragment} be the entry for {deferredFragment} in
+            {resultsByFragment}; if no such list exists, create it as an empty
+            list.
+          - Append {result} to {resultsForFragment}.
+          - Add {result} to {unsent}.
+          - Let {pendingFuturesForFragment} be the entry for {deferredFragment}
+            in {pendingFuturesByFragment}.
+ - If the size of {resultsForFragment} is equal to the size of + {pendingFuturesForFragment}: + - Let {fragmentPending}, {fragmentIncremental}, and + {fragmentCompleted}, be the result of + {CompleteFragment(deferredFragment, resultsForFragment, + pendingFuturesForFragment, newPendingResultsByFragment, + resultsByFragment, unsent)}. + - For each {pendingResult} in {fragmentPending}: + - Append {pendingResult} to {pending}. + - Add {pendingResult} to {pendingResults}. + - For each {result} in {fragmentIncremental}: + - Remove {result} from {unsent}. + - For each {completedEntry} in {completed}: + - Let {pendingResult} be the corresponding entry on + {completedEntry}. + - Remove {pendingResult} from {pendingResults}. + - For each {result} in {incremental}: + - Let {newPendingResults} and {futures} be the corresponding entries on + {incremental}. + - For each {future} of {futures}: + - If {future} represents completion of Stream Items: + - Initiate {future} if it has not yet been initiated. + - Add {future} to {pendingFutures}. + - Otherwise: + - Let {deferredFragments} be the Deferred Fragments completed by + {future}. + - For each {deferredFragment} in {deferredFragments}: + - Let {pendingFuturesForFragment} be the entry for + {deferredFragment} in {pendingFuturesByFragment}; if no such list + exists, create it as an empty list. + - Append {future} to {pendingFuturesForFragment}. + - If {deferredFragment} is contained by {pendingResults}: + - Initiate {future} if it has not yet been initiated. + - Add {future} to {pendingFutures}. + - For each {newPendingResult} of {newPendingResults}: + - If {newPendingResult} represents a Stream: + - Append {newPendingResult} to {pending}. + - Add {newPendingResult} to {pendingResults}. + - Otherwise: + - Let {pendingFuturesForFragment} be the entry for {newPendingResult} + in {pendingFuturesByFragment}; if no such list exists, continue to + the next {newPendingResult} of {newPendingResults}. 
+ - Let {parent} be the corresponding entry on {newPendingResult}. + - If {parent} is not defined or {pendingResults} does not contain + {parent}: + - Append {newPendingResult} to {pending}. + - Add {newPendingResult} to {pendingResults}. + - For each {future} in {pendingFuturesForFragment}: + - Initiate {future} if it has not yet been initiated. + - Add {future} to {pendingFutures}. + - Otherwise: + - Let {newPendingResultsForFragment} be the entry for {parent} in + {newPendingResultsByFragment}; if no such list exists, create it + as an empty list. + - Append {newPendingResult} to {newPendingResultsForFragment}. + - If {pendingResults} is empty, let {hasNext} be {false}, otherwise let it + be {true}. + - If {data} is defined: + - Let {incrementalResult} be a new unordered map containing {data}, + {errors} and {pending}. + - Yield {update}. + - Otherwise, if {incremental} or {completed} is not empty: + - Let {incrementalResult} be a new unordered map containing {pending}, + {incremental}, {completed} and {hasNext}. + - Yield {update}. + - If {hasNext} is {false}, complete this incremental result stream. ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet, serial): @@ -768,298 +859,36 @@ serial): {newPendingResults}, and {futures}. - Return {initialResult}. -### Processing Completed Futures - -As future executions are completed, the {ProcessCompletedFutures()} algorithm -describes how the results of these executions impact the incremental state. -Results from completed futures are processed individually, with each result -possibly: - -- Completing existing pending results. -- Contributing data for the next payload. -- Containing additional pending results or futures. - -{ProcessCompletedFutures()} may calls itself recursively on any new futures in -the event that they have completed. - -ProcessCompletedFutures(completedFutures, originalFutureStates, -originalDeferStates, originalNewFutures, originalUpdate). 
- -- Let {futureStates} be a new unordered map containing all entries in - {originalFutureStates}. -- Let {deferStates} be {originalDeferStates}. -- Let {pending}, {incremental}, and {completed} be new lists containing all the - items in the corresponding entries on {originalUpdate}. -- Let {newFutures} be a new set containing all members of {originalNewFutures}. -- For each {completedFuture} in {completedFutures}: - - Let {futureState} be the entry for {completedFuture} in {futureStates}. - - If {futureState} is not defined, continue to the next {completedFuture} in - {completedFutures}. - - If {completedFuture} completes the initial result: - - Let {initialResult} be the result of {completedFuture}. - - Let {newPendingResults} and {futures} be the corresponding entries on - {initialResult}. - - Let {pending}, {newFutures}, and {deferStates} be the result of - {FilterDefers(newPendingResults, futures)}. - - Let {data} and {errors} be the corresponding entries on {initialResult}. - - Let {update} be a new unordered map containing {data}, {errors}, and - {pending}. - - Remove the entry for {completedFuture} from {futureStates}. - - Return {update}, {pending}, {newFutures}, and {deferStates}. - - Otherwise, if {completedFuture} incrementally completes a stream: - - Let {resultUpdate}, {resultNewFutures}, and {deferStates} be the result of - {GetUpdateForStreamItems(deferStates, completedFuture)}. - - Remove the entry for {completedFuture} from {futureStates}. - - Otherwise: - - Let {sent} be the corresponding entry on {futureState}. - - If {sent} is {true}, continue to the next {completedFuture} in - {completedFutures}. - - Let {resultUpdate}, {resultNewFutures}, {futureStates}, and {deferStates} - be the result of {GetUpdateForDeferredResult(futureStates, deferStates, - completedFuture)}. - - Add all items in {resultNewFutures} to {newFutures}. 
- - Append all of the items in {pending}, {incremental}, and {completed} on - {resultUpdate} to {pending}, {incremental}, and {completed}, respectively. -- Let {newCompletedFutures} be the completed futures from {newFutures}. -- Let {update} be a new unordered map containing {pending}, {incremental}, and - {completed}. -- If {newCompletedFutures} is empty: - - Return {update}, {newFutures}, {futureStates}, and {deferStates}. -- Return the result of {ProcessCompletedFutures(newCompletedFutures, - futureStates, deferStates, update)}. - -FilterDefers(newPendingResults, futures, originalDeferStates): - -- Let {streamFutures} and {deferStates} be the result of - GetStreamFutures(originalDeferStates, futures). -- Let {pending}, {newFutures}, and {deferStates} be the result of - {FilterNestedDefers(newPendingResults, deferStates)}. -- Notify the executor as necessary that all items in {pending} have been - released. -- Return {pending}, {newFutures}, and {deferStates}. - -GetStreamFutures(deferStates, futures): - -- Initialize {streamFutures} to an empty list. -- Let {deferStates} be a new unordered map containing all entries in - {originalDeferStates}. -- For each {future} of {futures}. - - If {future} incrementally completes a stream: - - Append {future} to {streamFutures}. - - Continue to the next {future} in {futures}. - - Let {deferredFragments} be a list of the Deferred Fragments incrementally - completed by {future}. - - For each {deferredFragment} of {deferredFragments}: - - Let {deferState} be the entry in {deferStates} for {deferredFragment}. - - If {deferState} is not defined: - - Let {unreleasedFutures} be a new list containing {future}. - - Let {newDeferState} be a new unordered map containing - {unreleasedFutures}. - - Otherwise: - - Let {newDeferState} be a new unordered map containing all entries in - {deferState}. - - Reset {unreleasedFutures} on {newDeferState} to a new list containing - all of the original items, appended by {future}. 
- - Set the entry for {deferredFragment} in {deferStates} to {newDeferState}. -- Return {streamFutures} and {deferStates}. - -FilterNestedDefers(newPendingResults, originalDeferStates): - -- Let {deferStates} be a new unordered map containing all entries in - {originalDeferStates}. -- Initialize {pending} and {newFutures} to empty lists. -- For each {newPendingResult} in {newPendingResults}: - - If {newPendingResult} will incrementally complete a stream: - - Append {newPendingResult} to {pending}. - - Continue to the next {newPendingResult} in {newPendingResults}. - - Let {deferState} be the entry in {deferStates} for {newPendingResult}. - - If {deferState} is not defined: - - Continue to the next {newPendingResult} in {newPendingResults}. - - Let {parent} be the corresponding entry on {newPendingResult}. - - If {parent} is not defined: - - Let {futuresToRelease} and {deferState} be the result of - {ReleaseFragment(newPendingResult, deferStates)}. - - If {futuresToRelease} is empty: - - Continue to the next {newPendingResult} in {newPendingResults}. - - Append {newPendingResult} to {pending}. - - Append all of the items in {futuresToRelease} to {newFutures}. - - Continue to the next {newPendingResult} in {newPendingResults}. - - Let {parentDeferState} be the entry in {deferStates} for {parent}. - - Let {newParentDeferState} be a new unordered map containing all entries in - {parentDeferState}. - - Set the entry for {parent} in {deferStates} to {newParentDeferState}. - - Let {newChildren} be a new list containing all members of {children} on - {newParentDeferState} as well as {newPendingResult}. - - Set the entry for {children} in {newParentDeferState} to {newChildren}. -- Return {pending}, {newFutures}, and {deferStates}. - -ReleaseFragment(deferredFragment, deferStates): - -- Let {deferStates} be a new unordered map containing all entries in - {originalDeferStates}. -- Let {deferState} be the entry in {deferStates} for {newPendingResult}. 
-- Initialize {futuresToRelease} to an empty list. -- Let {unreleasedFutures} be the corresponding entry on {deferState}. -- Append all items in {unreleasedFutures} to {futuresToRelease}. -- Let {pendingFutures} be a new list containing all members of {pendingFutures} - on {deferState}. -- Append all of the items in {unreleasedFutures} to {pendingFutures}. -- Set the corresponding entry on {deferState} to {pendingFutures}. -- If {pendingFutures} is empty: - - Remove the entry for {deferredFragment} on {deferStates}. - - Let {children} be the corresponding entry on {deferState}. - - For each {child} of {children}: - - Let {childFuturesToRelease} and {deferStates} be the result of - ReleaseFragment(child, deferStates). - - Append all of the items in {childFuturesToRelease} to {futuresToRelease}. -- Return {futuresToRelease} and {deferStates}. - -GetUpdateForStreamItems(originalDeferStates, completedFuture): - -- Let {streamItems} be the result of {completedFuture}. -- Let {deferStates} be a new unordered map containing all the entries in - {originalDeferStates}. -- Let {stream}, {items}, and {errors} be the corresponding entries on {result}. -- If {items} is not defined, the stream has asynchronously ended: - - Let {completedEntry} be an empty unordered map. - - Set the entry for {pendingResult} on {completedEntry} to {stream}. - - Let {completed} be an list containing {completedEntry}. - - Let {update} be an unordered map containing {completed}. -- Otherwise, if {items} is {null}: - - Let {errors} be the corresponding entry on {streamItems}. - - Let {completedEntry} be an unordered map containing {errors}. - - Set the entry for {pendingResult} on {completedEntry} to {stream}. - - Let {completed} be a list containing {completedEntry}. - - Let {update} be an unordered map containing {completed} and {errors}. -- Otherwise: - - Let {incremental} be a list containing {streamItems}. - - Let {update} be an unordered map containing {incremental}. 
- - Let {newPendingResults} and {futures} be the corresponding entries on - {streamItems}. - - Let {pending}, {newFutures}, and {deferStates} be the result of - {FilterDefers(newPendingResults, futures, originalDeferStates)}. - - Set the corresponding entry on {update} to {pending}. -- Return {pending}, {newFutures}, and {deferStates}. - -GetUpdateForDeferredResult(originalFutureStates, originalDeferStates, -completedFuture): - -- Let {futureStates} and {deferStates} be a new unordered map containing all the - entries in {originalFutureStates} and {originalDeferStates}, respectively. -- Initialize {newFutures} to the empty set. -- Let {deferredResult} be the result of {completedFuture}. -- Let {deferredFragments}, {data}, and {errors} be the corresponding entries on - {deferredResult}. -- If {data} is {null}: - - Initialize {completed} to an empty list. - - For each {deferredFragment} of {deferredFragments}: - - Let {deferState} be the entry on {deferStates} for {deferredFragment}. - - If {deferState} is not defined, continue to the next {deferredFragment} of - {deferredFragments}. - - Let {futureStates} and {deferStates} be the result of - {RemoveFragment(deferredFragment, deferState, futureStates, deferStates)}. - - Let {completedEntry} be an unordered map containing {errors}. - - Set the entry for {pendingResult} in {completedEntry} to - {deferredFragment}. - - Append {completedEntry} to {completed}. - - Let {update} be an unordered map containing {completed}. - - Return {update}, {newFutures}, and {deferStates}. -- Initialize {pending}, {incremental} and {completed} to empty lists. -- For each {deferredFragment} of {deferredFragments}: - - Let {deferState} be the entry on {deferStates} for {deferredFragment}. - - If {deferState} is not defined, continue to the next {deferredFragment} of - {deferredFragments}. - - Let {newDeferState} be a new unordered map containing all entries on - {deferState}. 
- - Let {newCompletedFutures} be a new list containing all members of - {completedFutures} on {newDeferState}, appended by {completedFuture}. - - Set the {completedFutures} entry on {newDeferState} to - {newCompletedFutures}. - - Let {pendingFutures} be the corresponding entries on {newDeferState}. - - If the size of {newCompletedFutures} is equal to the size of - {pendingFutures}: - - Let {deferStates}, {fragmentPending}, {fragmentIncremental}, - {fragmentCompleted}, and {fragmentNewFutures} be the result of - {CompleteFragment(deferredFragment, deferState, futureStates, - deferStates)}. - - Append all items in {fragmentPending}, {fragmentIncremental}, and - {fragmentCompleted} to {pending}, {incremental}, and {completed}, - respectively. - - Add all items in {fragmentNewFutures} to {newFutures}. -- Let {update} be an unordered map containing {pending}, {incremental} and - {completed}. -- Return {update}, {newFutures}, and {deferStates}. - -RemoveFragment(deferredFragment, deferState, originalFutureStates, -originalDeferStates): - -- Let {futureStates} and {deferStates} be a new unordered map containing all the - entries in {originalFutureStates} and {originalDeferStates}, respectively. -- Remove the entry for {deferredFragment} on {deferStates}. -- Let {pendingFutures} and {children} be the corresponding entry on - {deferState}. -- For each {future} of {pendingFutures}: - - Let {futureState} be the entry for {future} on {futureStates}. - - Let {newFutureState} be a new unordered map containing all entries in - {futureState}. - - Reset {pendingDeferredFragments} on {newFutureState} to a new set containing - all of the original members except for {deferredFragment}. - - If {pendingDeferredFragments} on {futureState} is empty, remove the entry - for {future} in {futureStates}. - - Otherwise, set the entry for {future} in {futureStates} to {newFutureState}. -- For each {child} of {children}: - - Let {childDeferState} be the entry on {deferStates} for {child}. 
- - Let {deferStates} be the result of {RemoveFragment(child, childDeferState, - futureStates, deferStates)}. -- Return {futureStates} and {deferStates}. - -CompleteFragment(deferredFragment, deferState, futureStates, -originalDeferStates): - -- Let {deferStates} be a new unordered map containing all entries in - {originalDeferStates}. -- Remove the entry for {deferredFragment} on {deferStates}. -- Let {completedFutures} be the corresponding entry on {deferState}. +CompleteFragment(deferredFragment, resultsForFragment, +pendingFuturesForFragment, newPendingResultsByFragment, resultsByFragment, +unsent): + - Initialize {pending}, {incremental}, and {completed} to empty lists. -- Initialize {newFutures} to the empty set. -- For each {completedFuture} in {completedFutures}: - - Let {futureState} be the entry for {completedFuture} in {futureStates}. - - If {futureState} is not defined, continue to the next {completedFuture} in - {completedFutures}. - - Let {newFutureState} be a new unordered map containing all entries in - {futureState}. - - Set the entry for {completedFuture} in {futureStates} to {newFutureState}. - - Reset {pendingDeferredFragments} on {newFutureState} to a new set containing - all of the original members except for {deferredFragment}. - - If {pendingDeferredFragments} is empty, remove the entry for - {completedFuture} from {futureStates}. - - Let {deferredResult} be the result of {completedFuture}. - - Append {deferredResult} to {incremental}. - - Let {newPendingResults} and {futures} be the corresponding entries on - {deferredResult}. - - Let {resultPending}, {resultNewFutures}, and {deferStates} be the result of - {FilterDefers(newPendingResults, futures, deferStates)}. - - Append all items in {resultPending} to {pending}. - - Add all items in {resultNewFutures} to {newFutures}. -- Append {deferredFragment} to {completed}. -- Let {children} be the corresponding entry on {deferState}. -- Append all items in {children} to {pending}. 
-- For each {child} of {children}: - - Let {futuresToRelease} and {deferStates} be the result of {ReleaseFragment( - child, deferStates)}. - - Append all items in {futuresToRelease} to {newFutures}. - - Let {childDeferState} be the entry for {child} on {deferStates}. - - Let {pendingFutures} and {completedFutures} be the corresponding entries on - {childDeferState}. - - If the size of {pendingFutures} is equal to the size of {completedFutures}: - - Let {deferStates}, {childPending}, {childIncremental}, {childCompleted}, - and {childNewFutures} be the result of {CompleteFragment(child, - childDeferState, deferStates)}. - - Append all items in {childPending}, {childIncremental}, and - {childCompleted} to {pending}, {incremental}, and {completed}, - respectively. - - Add all items in {childNewFutures} to {newFutures}. -- Return {deferStates}, {pending}, {incremental}, {completed}, and {newFutures}. +- Let {completedEntry} be an empty unordered map. +- Set the entry for {pendingResult} on {completedEntry} to {deferredFragment}. +- Append {completedEntry} to {completed}. +- For each {result} in {resultsForFragment}: + - If {result} is in {unsent}: + - Append {result} to {incremental}. +- Let {newPendingResultsForFragment} be the entry for {deferredFragment} in + {newPendingResultsByFragment}. +- For each {deferredFragment} in {newPendingResultsForFragment}: + - Let {pendingFuturesForFragment} be the entry for {deferredFragment} in + {pendingFuturesByFragment}; if no such list exists, continue to the next + {deferredFragment} in {newPendingResultsForFragment}. + - Append {deferredFragment} to {pending}. + - Let {resultsForFragment} be the entry for {deferredFragment} in + {resultsByFragment}. 
+ - If the size of {resultsForFragment} is equal to the size of + {pendingFuturesForFragment}: + - Let {fragmentPending}, {fragmentIncremental}, and {fragmentCompleted}, be + the result of {CompleteFragment(deferredFragment, resultsForFragment, + pendingFuturesForFragment, newPendingResultsByFragment, resultsByFragment, + unsent)}. + - Append all items in {fragmentPending} to {pending}. + - Append all items in {fragmentIncremental} to {incremental}. + - Append all items in {fragmentCompleted} to {completed}. +- Return {pending}, {incremental}, and {completed}. ## Executing a Field Plan From 19216eb1074b5171f1468bac31dab783c394b1bc Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 30 Jan 2024 02:46:07 +0200 Subject: [PATCH 40/46] minor fixes to the major rewrite have to update the pendingFutures set when completing a fragment --- spec/Section 6 -- Execution.md | 67 +++++++++++++++++++--------------- 1 file changed, 38 insertions(+), 29 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 5ab0f67ac..832b49ff7 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -728,7 +728,7 @@ serial): serial)}. - Initialize {pendingResults}, {pendingFutures}, and {unsent} to the empty set. - Initialize {newPendingResultsByFragment}, {pendingFuturesByFragment}, and - {resultsByFragment} to empty unordered maps. + {completedFuturesByFragment} to empty unordered maps. - Repeat the following steps: - Wait for any futures within {pendingFutures} to complete. - Initialize {pending}, {incremental}, and {completed} to empty lists. @@ -769,25 +769,26 @@ serial): - For each {deferredFragment} in {deferredFragments}: - If {deferredFragment} is not contained by {pendingResults}, continue to the next {deferredFragment} in {deferredFragments}. - - Let {resultsForFragment} be the entry for {deferredFragment} in - {resultsByFragment}; if no such list exists, create it as an empty - list. 
- - Append {result} to {resultsForFragment}. - - Add {result} to {unsent}. + - Let {completedFuturesForFragment} be the entry for + {deferredFragment} in {completedFuturesByFragment}; if no such list + exists, create it as an empty list. + - Append {future} to {completedFuturesForFragment}. + - Add {future} to {unsent}. - Let {pendingFuturesForFragment} be the entry for {deferredFragment} in {pendingFuturesByFragment}. - - If the size of {resultsForFragment} is equal to the size of + - If the size of {completedFuturesForFragment} is equal to the size of {pendingFuturesForFragment}: - - Let {fragmentPending}, {fragmentIncremental}, and - {fragmentCompleted}, be the result of - {CompleteFragment(deferredFragment, resultsForFragment, + - Let {fragmentPendingFutures}, {fragmentPending}, + {fragmentIncremental}, and {fragmentCompleted}, be the result of + {CompleteFragment(deferredFragment, completedFuturesForFragment, pendingFuturesForFragment, newPendingResultsByFragment, - resultsByFragment, unsent)}. + completedFuturesByFragment, unsent)}. + - Add all items in {fragmentPendingFutures} to {pendingFutures}. - For each {pendingResult} in {fragmentPending}: - Append {pendingResult} to {pending}. - Add {pendingResult} to {pendingResults}. - - For each {result} in {fragmentIncremental}: - - Remove {result} from {unsent}. + - For each {fragmentResult} in {fragmentIncremental}: + - Remove {fragmentResult} from {unsent}. - For each {completedEntry} in {completed}: - Let {pendingResult} be the corresponding entry on {completedEntry}. @@ -859,36 +860,44 @@ serial): {newPendingResults}, and {futures}. - Return {initialResult}. -CompleteFragment(deferredFragment, resultsForFragment, -pendingFuturesForFragment, newPendingResultsByFragment, resultsByFragment, -unsent): +CompleteFragment(deferredFragment, completedFuturesForFragment, +pendingFuturesForFragment, newPendingResultsByFragment, +completedFuturesByFragment, unsent): +- Initialize {pendingFutures} to the empty set. 
- Initialize {pending}, {incremental}, and {completed} to empty lists. - Let {completedEntry} be an empty unordered map. - Set the entry for {pendingResult} on {completedEntry} to {deferredFragment}. - Append {completedEntry} to {completed}. -- For each {result} in {resultsForFragment}: - - If {result} is in {unsent}: +- For each {future} in {completedFuturesForFragment}: + - If {future} is in {unsent}: + - Let {result} be the result of {future}. - Append {result} to {incremental}. - Let {newPendingResultsForFragment} be the entry for {deferredFragment} in {newPendingResultsByFragment}. - For each {deferredFragment} in {newPendingResultsForFragment}: - - Let {pendingFuturesForFragment} be the entry for {deferredFragment} in - {pendingFuturesByFragment}; if no such list exists, continue to the next + - Let {fragmentPendingFuturesForFragment} be the entry for {deferredFragment} + in {pendingFuturesByFragment}; if no such list exists, continue to the next {deferredFragment} in {newPendingResultsForFragment}. - Append {deferredFragment} to {pending}. - - Let {resultsForFragment} be the entry for {deferredFragment} in - {resultsByFragment}. - - If the size of {resultsForFragment} is equal to the size of - {pendingFuturesForFragment}: - - Let {fragmentPending}, {fragmentIncremental}, and {fragmentCompleted}, be - the result of {CompleteFragment(deferredFragment, resultsForFragment, - pendingFuturesForFragment, newPendingResultsByFragment, resultsByFragment, - unsent)}. + - Let {fragmentCompletedFuturesForFragment} be the entry for + {deferredFragment} in {completedFuturesByFragment}. + - If the size of {fragmentCompletedFuturesForFragment} is equal to the size of + {fragmentPendingFuturesForFragment}: + - Let {fragmentNewFutures}, {fragmentPending}, {fragmentIncremental}, and + {fragmentCompleted}, be the result of {CompleteFragment(deferredFragment, + resultsForFragment, pendingFuturesForFragment, + newPendingResultsByFragment, resultsByFragment, unsent)}. 
+ - Add all items in {fragmentPendingFutures} to {pendingFutures}. - Append all items in {fragmentPending} to {pending}. - Append all items in {fragmentIncremental} to {incremental}. - Append all items in {fragmentCompleted} to {completed}. -- Return {pending}, {incremental}, and {completed}. + - Otherwise: + - For each {future} in {fragmentPendingFuturesForFragment}: + - If {completedFuturesForFragment} does not contain {future}: + - Initiate {future} if it has not yet been initiated. + - Add {future} to {pendingFutures}. +- Return {pendingFutures}, {pending}, {incremental}, and {completed}. ## Executing a Field Plan From 3ffa48ab463bbc880430a987ce91da4840dc0557 Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 30 Jan 2024 02:50:23 +0200 Subject: [PATCH 41/46] switch line order --- spec/Section 6 -- Execution.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 832b49ff7..15952ea6b 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -730,8 +730,8 @@ serial): - Initialize {newPendingResultsByFragment}, {pendingFuturesByFragment}, and {completedFuturesByFragment} to empty unordered maps. - Repeat the following steps: - - Wait for any futures within {pendingFutures} to complete. - Initialize {pending}, {incremental}, and {completed} to empty lists. + - Wait for any futures within {pendingFutures} to complete. - Let {completedFutures} be those completed futures. - For each {future} in {completedFutures}: - Remove {future} from {pendingFutures}. 
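The loop refined by the commits above turns on a single wake-up step: wait until at least one pending future completes, then treat every future that has completed by that point as this iteration's batch. A minimal model of that step, in TypeScript for illustration only (the spec is language-agnostic, and `Tracked`, `track`, and `waitForAny` are names invented here, not spec or graphql-js terms):

```typescript
// Each future is wrapped so its settled state can be inspected synchronously,
// since a raw Promise does not expose whether it has resolved yet.
interface Tracked<T> {
  promise: Promise<T>;
  settled: boolean;
  value?: T;
}

function track<T>(promise: Promise<T>): Tracked<T> {
  const t: Tracked<T> = { promise, settled: false };
  promise.then((value) => {
    t.settled = true;
    t.value = value;
  });
  return t;
}

// Models "Wait for any futures within {pendingFutures} to complete. Let
// {completedFutures} be the futures in {pendingFutures} that have completed":
// block until at least one future settles, then drain all settled ones.
async function waitForAny<T>(
  pendingFutures: Set<Tracked<T>>,
): Promise<Tracked<T>[]> {
  await Promise.race([...pendingFutures].map((t) => t.promise));
  // One extra microtask so every already-resolved promise's `then` has run.
  await Promise.resolve();
  const completedFutures = [...pendingFutures].filter((t) => t.settled);
  for (const t of completedFutures) pendingFutures.delete(t);
  return completedFutures;
}
```

Draining every settled future per wake-up, rather than one at a time, is what allows a single incremental payload to batch several deferred results together.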
From 6c289c4091859a5722e1d7c5c53693acefad4d8e Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 30 Jan 2024 02:55:34 +0200 Subject: [PATCH 42/46] add return line when finished --- spec/Section 6 -- Execution.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 15952ea6b..6887f4185 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -842,7 +842,8 @@ serial): - Let {incrementalResult} be a new unordered map containing {pending}, {incremental}, {completed} and {hasNext}. - Yield {update}. - - If {hasNext} is {false}, complete this incremental result stream. + - If {hasNext} is {false}, complete this incremental result stream and + return. ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet, serial): From 894b9944be74a823f7b492506de305526963734c Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 30 Jan 2024 09:03:05 +0200 Subject: [PATCH 43/46] finish renaming update --- spec/Section 6 -- Execution.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 6887f4185..47f9af591 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -837,11 +837,11 @@ serial): - If {data} is defined: - Let {incrementalResult} be a new unordered map containing {data}, {errors} and {pending}. - - Yield {update}. + - Yield {incrementalResult}. - Otherwise, if {incremental} or {completed} is not empty: - Let {incrementalResult} be a new unordered map containing {pending}, {incremental}, {completed} and {hasNext}. - - Yield {update}. + - Yield {incrementalResult}. - If {hasNext} is {false}, complete this incremental result stream and return. 
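The two commits above settle when the loop yields and when it stops: the initial result is an unordered map of {data}, {errors}, and {pending}; each later payload is a map of {pending}, {incremental}, {completed}, and {hasNext}; and the stream completes once {hasNext} is {false}. That per-iteration decision can be sketched synchronously as follows (TypeScript for illustration; `buildPayload` and its field types are invented names, and `hasNext` is included on the initial payload here only for symmetry with later ones):

```typescript
type CompletedEntry = { pendingResult: string; errors?: unknown[] };

interface IterationState {
  data?: Record<string, unknown>; // defined only for the initial result
  errors?: unknown[];
  pending: string[]; // defers/streams that became pending this iteration
  incremental: unknown[];
  completed: CompletedEntry[];
  pendingResults: Set<string>; // everything still outstanding
}

// Returns the payload to yield, or null when there is nothing to send yet.
// A returned payload with hasNext === false means the incremental result
// stream is complete and the generator should return.
function buildPayload(state: IterationState): Record<string, unknown> | null {
  const hasNext = state.pendingResults.size > 0;
  if (state.data !== undefined) {
    // Initial payload: {data}, {errors}, and {pending}.
    return {
      data: state.data,
      errors: state.errors ?? [],
      pending: state.pending,
      hasNext,
    };
  }
  if (state.incremental.length > 0 || state.completed.length > 0) {
    // Subsequent payload: {pending}, {incremental}, {completed}, {hasNext}.
    return {
      pending: state.pending,
      incremental: state.incremental,
      completed: state.completed,
      hasNext,
    };
  }
  return null; // nothing new this iteration; keep waiting on futures
}
```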
From 3768025497a69ccc2ef4e64cca3cc5add00073ed Mon Sep 17 00:00:00 2001 From: Yaacov Rydzinski Date: Tue, 30 Jan 2024 09:23:23 +0200 Subject: [PATCH 44/46] initialize only after yielding --- spec/Section 6 -- Execution.md | 62 +++++++++++++++++----------------- 1 file changed, 31 insertions(+), 31 deletions(-) diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md index 47f9af591..1691f76bc 100644 --- a/spec/Section 6 -- Execution.md +++ b/spec/Section 6 -- Execution.md @@ -726,13 +726,29 @@ serial): - Let {initialFuture} be the future result of {ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet, serial)}. -- Initialize {pendingResults}, {pendingFutures}, and {unsent} to the empty set. +- Initiate {initialFuture}. +- Initialize {pendingFutures} to a set containing {initialFuture}. +- Initialize {pendingResults} {pendingFutures}, and {unsent} to the empty set. - Initialize {newPendingResultsByFragment}, {pendingFuturesByFragment}, and {completedFuturesByFragment} to empty unordered maps. - Repeat the following steps: - - Initialize {pending}, {incremental}, and {completed} to empty lists. - - Wait for any futures within {pendingFutures} to complete. - - Let {completedFutures} be those completed futures. + - If none of the futures in {pendingFutures} have completed: + - If {incremental} or {completed} is not empty: + - If {pendingResults} is empty, let {hasNext} be {false}, otherwise let it + be {true}. + - Let {incrementalResult} be a new unordered map containing {pending}, + {incremental}, {completed} and {hasNext}. + - Yield {incrementalResult}. + - If {hasNext} is {false}, complete this incremental result stream and + return. + - For each {future} in {newFutures}: + - Initialize {future} if it has not yet been initialized. + - Add {future} to {pendingFutures}. + - Reset {newFutures} to the empty set. + - Reset {pending}, {incremental}, and {completed} to empty lists. 
+ - Wait for any futures within {pendingFutures} to complete. + - Let {completedFutures} be the futures in {pendingFutures} that have + completed. - For each {future} in {completedFutures}: - Remove {future} from {pendingFutures}. - Let {result} be the result of {future}. @@ -778,12 +794,12 @@ serial): in {pendingFuturesByFragment}. - If the size of {completedFuturesForFragment} is equal to the size of {pendingFuturesForFragment}: - - Let {fragmentPendingFutures}, {fragmentPending}, + - Let {fragmentNewFutures}, {fragmentPending}, {fragmentIncremental}, and {fragmentCompleted}, be the result of {CompleteFragment(deferredFragment, completedFuturesForFragment, pendingFuturesForFragment, newPendingResultsByFragment, completedFuturesByFragment, unsent)}. - - Add all items in {fragmentPendingFutures} to {pendingFutures}. + - Add all items in {fragmentNewFutures} to {newFutures}. - For each {pendingResult} in {fragmentPending}: - Append {pendingResult} to {pending}. - Add {pendingResult} to {pendingResults}. @@ -796,10 +812,9 @@ serial): - For each {result} in {incremental}: - Let {newPendingResults} and {futures} be the corresponding entries on {incremental}. - - For each {future} of {futures}: - - If {future} represents completion of Stream Items: - - Initiate {future} if it has not yet been initiated. - - Add {future} to {pendingFutures}. + - For each {future} of {futures}: If {future} represents completion of + Stream Items: + - Add {future} to {newFutures}. - Otherwise: - Let {deferredFragments} be the Deferred Fragments completed by {future}. @@ -809,8 +824,7 @@ serial): exists, create it as an empty list. - Append {future} to {pendingFuturesForFragment}. - If {deferredFragment} is contained by {pendingResults}: - - Initiate {future} if it has not yet been initiated. - - Add {future} to {pendingFutures}. + - Add {future} to {newFutures}. 
     - For each {newPendingResult} of {newPendingResults}:
       - If {newPendingResult} represents a Stream:
         - Append {newPendingResult} to {pending}.
@@ -825,25 +839,12 @@ serial):
         - Append {newPendingResult} to {pending}.
         - Add {newPendingResult} to {pendingResults}.
         - For each {future} in {pendingFuturesForFragment}:
-          - Initiate {future} if it has not yet been initiated.
-          - Add {future} to {pendingFutures}.
+          - Add {future} to {newFutures}.
       - Otherwise:
         - Let {newPendingResultsForFragment} be the entry for {parent} in
           {newPendingResultsByFragment}; if no such list exists, create it as an
           empty list.
        - Append {newPendingResult} to {newPendingResultsForFragment}.
-  - If {pendingResults} is empty, let {hasNext} be {false}, otherwise let it
-    be {true}.
-  - If {data} is defined:
-    - Let {incrementalResult} be a new unordered map containing {data},
-      {errors} and {pending}.
-    - Yield {incrementalResult}.
-  - Otherwise, if {incremental} or {completed} is not empty:
-    - Let {incrementalResult} be a new unordered map containing {pending},
-      {incremental}, {completed} and {hasNext}.
-    - Yield {incrementalResult}.
-    - If {hasNext} is {false}, complete this incremental result stream and
-      return.

 ExecuteInitialResult(variableValues, initialValue, objectType, selectionSet,
 serial):

@@ -865,7 +866,7 @@
 CompleteFragment(deferredFragment, completedFuturesForFragment,
 pendingFuturesForFragment, newPendingResultsByFragment,
 completedFuturesByFragment, unsent):

-- Initialize {pendingFutures} to the empty set.
+- Initialize {newFutures} to the empty set.
 - Initialize {pending}, {incremental}, and {completed} to empty lists.
 - Let {completedEntry} be an empty unordered map.
 - Set the entry for {pendingResult} on {completedEntry} to {deferredFragment}.
@@ -889,16 +890,15 @@ completedFuturesByFragment, unsent):
     {fragmentCompleted}, be the result of {CompleteFragment(deferredFragment,
     resultsForFragment, pendingFuturesForFragment,
     newPendingResultsByFragment, resultsByFragment, unsent)}.
-  - Add all items in {fragmentPendingFutures} to {pendingFutures}.
+  - Add all items in {fragmentNewFutures} to {newFutures}.
   - Append all items in {fragmentPending} to {pending}.
   - Append all items in {fragmentIncremental} to {incremental}.
   - Append all items in {fragmentCompleted} to {completed}.
 - Otherwise:
   - For each {future} in {fragmentPendingFuturesForFragment}:
     - If {completedFuturesForFragment} does not contain {future}:
-      - Initiate {future} if it has not yet been initiated.
-      - Add {future} to {pendingFutures}.
+      - Add {future} to {newFutures}.
-- Return {pendingFutures}, {pending}, {incremental}, and {completed}.
+- Return {newFutures}, {pending}, {incremental}, and {completed}.

 ## Executing a Field Plan

From 4fc4dc1bddfd609b425b982ed877ff199bc435e1 Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Tue, 30 Jan 2024 09:30:10 +0200
Subject: [PATCH 45/46] fix reset of variable

---
 spec/Section 6 -- Execution.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index 1691f76bc..acc8cde8a 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -728,7 +728,7 @@ serial):
   serial)}.
 - Initiate {initialFuture}.
 - Initialize {pendingFutures} to a set containing {initialFuture}.
-- Initialize {pendingResults} {pendingFutures}, and {unsent} to the empty set.
+- Initialize {pendingResults} and {unsent} to the empty set.
 - Initialize {newPendingResultsByFragment}, {pendingFuturesByFragment}, and
   {completedFuturesByFragment} to empty unordered maps.
 - Repeat the following steps:

From a966583d8442ab234d44606911c8cdab37c0dc3f Mon Sep 17 00:00:00 2001
From: Yaacov Rydzinski
Date: Tue, 30 Jan 2024 09:42:42 +0200
Subject: [PATCH 46/46] fix variable name typo

---
 spec/Section 6 -- Execution.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/spec/Section 6 -- Execution.md b/spec/Section 6 -- Execution.md
index acc8cde8a..5613664c2 100644
--- a/spec/Section 6 -- Execution.md
+++ b/spec/Section 6 -- Execution.md
@@ -863,7 +863,7 @@ serial):
 - Return {initialResult}.

 CompleteFragment(deferredFragment, completedFuturesForFragment,
-pendingFuturesForFragment, newPendingResultsByFragment,
+pendingFuturesByFragment, newPendingResultsByFragment,
 completedFuturesByFragment, unsent):
@@ -888,8 +888,8 @@ completedFuturesByFragment, unsent):
   {fragmentPendingFuturesForFragment}:
   - Let {fragmentNewFutures}, {fragmentPending}, {fragmentIncremental}, and
     {fragmentCompleted}, be the result of {CompleteFragment(deferredFragment,
-    resultsForFragment, pendingFuturesForFragment,
-    newPendingResultsByFragment, resultsByFragment, unsent)}.
+    resultsForFragment, pendingFuturesByFragment, newPendingResultsByFragment,
+    resultsByFragment, unsent)}.
 - Add all items in {fragmentNewFutures} to {newFutures}.
 - Append all items in {fragmentPending} to {pending}.
 - Append all items in {fragmentIncremental} to {incremental}.
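
The control flow PATCH 44 introduces — collect newly discovered futures in {newFutures}, yield a payload only once no pending future has already completed, and initiate the new futures only after yielding — can be sketched outside the spec's pseudocode. The following is an illustrative Python/asyncio reduction, not the spec algorithm or graphql-js: `spawn` is a hypothetical callback standing in for the deferred/stream futures that a completed result can reveal, and the payload is simplified to just `incremental` and `hasNext`.

```python
import asyncio


async def incremental_stream(initial, spawn):
    # pending: initiated futures; new_coros: discovered but not yet initiated
    # (loosely mirroring {pendingFutures} and {newFutures} in the patches).
    pending = {asyncio.ensure_future(initial)}
    new_coros = []
    while True:
        done, pending = await asyncio.wait(
            pending, return_when=asyncio.FIRST_COMPLETED
        )
        incremental = []
        for fut in done:
            result = fut.result()
            incremental.append(result)
            new_coros.extend(spawn(result))
        # Keep draining while other pending futures have already completed,
        # so each payload batches as much finished work as possible.
        while any(f.done() for f in pending):
            done = {f for f in pending if f.done()}
            pending -= done
            for fut in done:
                result = fut.result()
                incremental.append(result)
                new_coros.extend(spawn(result))
        # No pending future has completed: emit one batched payload.
        has_next = bool(pending) or bool(new_coros)
        yield {"incremental": incremental, "hasNext": has_next}
        if not has_next:
            return
        # Initiate follow-up futures only after yielding (PATCH 44's change).
        pending |= {asyncio.ensure_future(c) for c in new_coros}
        new_coros = []


async def demo():
    async def leaf(n):
        await asyncio.sleep(0)
        return n

    def spawn(result):
        # Hypothetical: each result reveals one follow-up future until depth 2.
        return [leaf(result + 1)] if result < 2 else []

    return [payload async for payload in incremental_stream(leaf(0), spawn)]


payloads = asyncio.run(demo())
```

Deferring initiation until after the yield is what lets results that complete while a payload is being assembled ride along in the next payload instead of forcing an extra one.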