Lower level API with manual flow control support #763


Closed

glbrntt opened this issue Mar 24, 2020 · 1 comment
Labels
kind/enhancement (Improvements to existing feature), version/v1 (Relates to v1)

Comments

@glbrntt
Collaborator

glbrntt commented Mar 24, 2020

Lower Level API

gRPC Swift would benefit from having a lower-level API where users have
finer-grained control over the RPC, such as manual flow control and control over
where code is executed (currently the API restricts user code to being executed
on the event loop).

On the server, services are provided by classes conforming to
CallHandlerProvider; importantly this includes routing from a method name to a
GRPCCallHandler, which is a refinement of NIO's ChannelHandler. This gives us
plenty of scope to decide on the server-side API at a later date without making
breaking changes.

On the client, generated stubs rely on a GRPCChannel as a means to make an RPC.
Currently it has one method per RPC type, so providing a lower-level API requires
extending the protocol, which would normally be a breaking API change. However,
since this protocol isn't intended for users to implement (the types returned
from the RPC factory methods do not have public initializers), extending it with
new requirements that have default implementations should be okay and would not
break API.
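
(As an aside, this is a general Swift technique rather than anything specific to
gRPC Swift: adding a new requirement together with a default implementation means
existing conformances keep compiling. A minimal sketch, using purely hypothetical
names:)

// Hypothetical protocol, for illustration only.
protocol SomeChannel {
  func existingCall()

  // Requirement added in a later release.
  func lowerLevelCall()
}

extension SomeChannel {
  // Default implementation: types which conformed before the new requirement
  // was added continue to compile unchanged, so the addition isn't API-breaking.
  func lowerLevelCall() {
    fatalError("lowerLevelCall() is not supported by this channel")
  }
}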

What could a lower-level API look like?

Generally speaking, gRPC servers operate on a request stream to produce a
response stream. User code would be provided with a means of writing responses
back to the client and would return an object which reacts to request messages
from the client.

server_rpc(outbound_writer) -> inbound_handler

Clients stream requests and receive a stream of responses; client stubs could
accept a means to operate on the response stream and return something
which allows the caller to send messages on the request stream.

client_rpc(inbound_handler) -> outbound_writer

To support manual flow control, the outbound streams must expose a way of
indicating whether messages may be sent without buffering and the inbound
streams must expose a way of requesting more messages. User implemented code
with access to both streams therefore acts as an intermediary between the two
streams.
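
By analogy, the server side of this shape could be expressed as a pair of
protocols along the following lines. This is only a sketch: the names
ResponseWriter and RequestObserver are hypothetical and simply mirror the
client-side protocols proposed below.

/// Hypothetical: provided to user code as the means of writing responses and of
/// exercising flow control over the inbound request stream.
public protocol ResponseWriter {
  /// The response message type for the RPC.
  associatedtype Response: GRPCPayload

  /// Send a response message to the client.
  func send(response message: Response)

  /// Close the response stream with the given status.
  func sendEnd(status: GRPCStatus)

  /// Requests that at most `count` additional request messages are delivered
  /// to the request observer.
  func request(messages count: Int)
}

/// Hypothetical: implemented by user code to react to the request stream.
public protocol RequestObserver {
  /// The request message type for the RPC.
  associatedtype Request: GRPCPayload

  /// Receive a request message from the client.
  func receive(request message: Request)

  /// The client has closed the request stream.
  func receiveEnd()
}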

An API for the client could look something like the following.

To observe response messages sent from the server the user would implement a
protocol such as:

public protocol ResponseObserver {
  /// The response message type for the RPC.
  associatedtype Response: GRPCPayload

  /// Receive the initial metadata from the server. If the server fast-fails the RPC (as a
  /// "Status-Only" RPC) then the metadata will be provided in `receive(trailingMetadata:)`,
  /// and **not** `receive(initialMetadata:)`. Called at most once.
  func receive(initialMetadata metadata: HPACKHeaders)

  /// Receive a response message from the server. For unary and client streaming calls this will
  /// be called at most once. For server streaming and bidirectional calls this may be called any
  /// number of times.
  func receive(response message: Response)

  /// Receive the trailing metadata from the server. If the server fast-fails the RPC (as a
  /// "Status-Only" RPC) then the metadata will be provided in `receive(trailingMetadata:)`,
  /// and **not** `receive(initialMetadata:)`. Called at most once.
  func receive(trailingMetadata metadata: HPACKHeaders)

  /// Receive the status of the RPC. Called exactly once to indicate that the RPC has completed.
  /// No other `receive()` methods will be called after this method.
  func receive(status: GRPCStatus)
}
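
As a usage sketch (assuming the protocol above; GRPCPayload and GRPCStatus come
from the GRPC module and HPACKHeaders from NIOHPACK), a hypothetical observer
might simply collect responses and record the final status:

import GRPC
import NIOHPACK

/// Hypothetical observer which buffers every response and remembers the final status.
final class CollectingObserver<Response: GRPCPayload>: ResponseObserver {
  private(set) var responses: [Response] = []
  private(set) var status: GRPCStatus?

  func receive(initialMetadata metadata: HPACKHeaders) {
    // Inspect any custom initial metadata here.
  }

  func receive(response message: Response) {
    self.responses.append(message)
  }

  func receive(trailingMetadata metadata: HPACKHeaders) {
    // Inspect any custom trailing metadata here.
  }

  func receive(status: GRPCStatus) {
    self.status = status
  }
}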

To send requests to the server the caller of the RPC would be provided with an
implementation of:

public protocol RequestWriter {
  /// The request message type for the RPC.
  associatedtype Request: GRPCPayload

  /// Send a request message to the server. Unary and server streaming RPCs should send
  /// at most one message. Subsequent messages will be dropped.
  func send(request message: Request)

  /// Close the request stream. Must be called exactly once.
  func sendEnd()

  /// Cancels the RPC.
  func cancel()
}

However, as noted above, we also need to expose a way for the caller to know
when they may write without buffering, and a way to request more responses from
the server.

public protocol RequestWriter {
  // ... as above

  /// Returns true when messages may be written without being buffered.
  var isReady: Bool { get }

  /// Register a callback that is executed when `isReady` becomes true. Note
  /// that `isReady` may become `false` again while the callback is executing.
  func whenReady(_ callback: @escaping () -> ())

  /// Requests that at most `count` additional responses are delivered to the
  /// response observer.
  func request(responses count: Int)
}

(Note that the user would not be responsible for implementing RequestWriter;
the protocol is shown here for illustration.)
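
To make the flow-control interaction concrete, here is a sketch (building on the
protocols proposed above, with the same imports as the earlier example; all of
the names are hypothetical) in which responses are requested one at a time and
requests are only written while the writer reports that it is ready:

/// Hypothetical: processes responses one at a time, only requesting the next
/// response once the previous one has been handled.
final class PacedObserver<Writer: RequestWriter, Response: GRPCPayload>: ResponseObserver {
  private let writer: Writer

  init(writer: Writer) {
    self.writer = writer
    // Nothing is delivered until it is requested: ask for the first response.
    writer.request(responses: 1)
  }

  func receive(initialMetadata metadata: HPACKHeaders) {}

  func receive(response message: Response) {
    self.handle(message)
    // Manual flow control: only now request the next response.
    self.writer.request(responses: 1)
  }

  func receive(trailingMetadata metadata: HPACKHeaders) {}
  func receive(status: GRPCStatus) {}

  private func handle(_ message: Response) { /* application logic */ }
}

/// Hypothetical: writes requests only while the writer is ready, deferring the
/// remainder until `whenReady` fires, then closes the request stream.
func drain<Writer: RequestWriter>(_ requests: [Writer.Request], into writer: Writer) {
  var remaining = requests[...]

  func writeMore() {
    while writer.isReady, let next = remaining.popFirst() {
      writer.send(request: next)
    }
    if remaining.isEmpty {
      writer.sendEnd()
    } else {
      writer.whenReady(writeMore)
    }
  }

  writeMore()
}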

The API could be standardised for all four RPC types, leaving the addition to
the GRPCChannel as something like:

protocol GRPCChannel {
  // ...

  /// Make an RPC.
  ///
  /// - Parameter path: The path of the RPC, e.g. '/echo.Echo/Get'.
  /// - Parameter callType: The type of the RPC (unary, client streaming, server
  ///     streaming, or bidirectional).
  /// - Parameter callOptions: Any call options associated with the RPC.
  /// - Parameter factory: A factory which returns a response observer given a
  ///     request writer.
  func makeRPC<Request: GRPCPayload, Observer: ResponseObserver>(
    path: String,
    callType: GRPCCallType,
    callOptions: CallOptions,
    factory: (RequestWriter<Request>) -> Observer
  ) -> (RequestWriter<Request>, Observer)
}

Making an RPC using the lower-level API requires a factory which is provided
with a request writer and returns an observer. This allows the user-provided
observer to hold the request writer, letting it request more responses and
determine when it should write.

Users wouldn't usually call the makeRPC function directly; instead they would
call it via a generated stub:

class SomeGeneratedClient: GRPCClient {
  let channel: GRPCChannel
  var defaultCallOptions: CallOptions

  // ...

  func aUnaryRPC<Observer: ResponseObserver>(
    callOptions: CallOptions? = nil,
    _ factory: (RequestWriter<UnaryRPCRequest>) -> Observer
  ) -> (RequestWriter<UnaryRPCRequest>, Observer) where Observer.Response == UnaryRPCResponse {
    return self.channel.makeRPC(
      path: "/example/UnaryRPC/",
      callType: .unary,
      callOptions: callOptions ?? self.defaultCallOptions,
      factory: factory
    )
  }
}
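
A caller could then use the stub along these lines. (UnaryRPCRequest,
UnaryRPCResponse and CollectingObserver are the placeholder names from the
sketches above; the client initializer and the existence of a `channel` are also
assumed for the sake of the example.)

// `channel` is an existing GRPCChannel.
let client = SomeGeneratedClient(channel: channel, defaultCallOptions: CallOptions())

// The factory is handed the request writer; this particular observer doesn't need it.
let (writer, collector) = client.aUnaryRPC { _ in
  CollectingObserver<UnaryRPCResponse>()
}

// Ask for the single response, send the request, then close the request stream.
writer.request(responses: 1)
writer.send(request: UnaryRPCRequest())
writer.sendEnd()

// `collector.responses` and `collector.status` are populated asynchronously as
// the observer's receive methods are called.
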
@glbrntt glbrntt added the kind/enhancement and nio labels Mar 24, 2020
@glbrntt glbrntt removed the nio label Jul 6, 2020
@glbrntt glbrntt added the version/v1 Relates to v1 label Apr 30, 2025
@glbrntt
Collaborator Author

glbrntt commented Apr 30, 2025

We're no longer actively developing features for v1 so this is unlikely to ever happen.

@glbrntt glbrntt closed this as not planned Apr 30, 2025