
unexpected tonic_health error: failed serving connection: connection error #2181

Open

fabian-braun opened this issue Feb 10, 2025
Bug Report

Version

    > cargo tree | grep tonic
    │   │   │   │   │   └── tonic v0.12.3
    │   │   │   │   ├── tonic v0.12.3 (*)
    │   │   │   └── tonic v0.12.3 (*)
    │   │   └── tonic v0.12.3 (*)
    │   │   └── tonic v0.12.3 (*)
    │   │   └── tonic v0.12.3 (*)
    │   │   └── tonic v0.12.3 (*)
    │   │   └── tonic v0.12.3 (*)
    │   │   └── tonic v0.12.3 (*)
    │   │   └── tonic v0.12.3 (*)
    │   │   └── tonic v0.12.3 (*)
    │   ├── tonic v0.12.3 (*)
    │   ├── tonic-health v0.12.3
    │   │   └── tonic v0.12.3 (*)
    │   └── tonic v0.12.3 (*)
    ├── tonic v0.12.3 (*)
    ├── tonic-health v0.12.3 (*)

Platform

    Linux #16~22.04.1-Ubuntu SMP Mon Aug 19 19:38:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Crates

tonic-health

Description

We're using tonic-health for native Kubernetes gRPC health checks (as described in the Kubernetes documentation).

Here is the relevant snippet from the deployment descriptor:

[Image: gRPC health probe configuration from the deployment manifest]

On each invocation of the health endpoint in our application, we see an error logged at DEBUG level:

    source: /github/home/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tonic-0.12.3/src/transport/server/mod.rs
    line: 703
    message: failed serving connection: connection error

The health check itself works as expected: we can confirm that the endpoint responds with SERVING / NOT_SERVING correctly. I therefore suspect the problem is related to how the connection is terminated.
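
For reference, a minimal sketch of how the status can be confirmed from a Rust client, assuming the generated health client that tonic-health re-exports under tonic_health::pb and the port from the initialisation code below (this is not our exact test code):

    use tonic_health::pb::health_client::HealthClient;
    use tonic_health::pb::HealthCheckRequest;

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        let mut client = HealthClient::connect("http://127.0.0.1:9000").await?;
        // An empty service name queries the overall server status
        let response = client
            .check(HealthCheckRequest { service: String::new() })
            .await?;
        // Prints Serving / NotServing according to the reporter's current status
        println!("status: {:?}", response.into_inner().status());
        Ok(())
    }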

I would expect no errors to be logged when the health check is working properly, not even at DEBUG level, especially with a standard Kubernetes setup.

More info

Relevant initialisation code in main.rs:

    let (mut health_reporter, health_server) = tonic_health::server::health_reporter();
    health_reporter
        .set_service_status("", ServingStatus::NotServing)
        .await;
    let addr = "0.0.0.0:9000".parse().unwrap();
    // serve_with_shutdown returns a future, which we spawn onto the runtime
    let grpc_server = Server::builder()
        .add_service(health_server)
        .add_service(CustomApiServer::new(business_logic_service))
        .serve_with_shutdown(addr, cancellation_token.clone().cancelled_owned());
    let grpc_server_handle = tokio::spawn(grpc_server);
    // set serving happens down the line
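
The "down the line" transition is just another call on the same reporter handle; a minimal sketch, assuming startup has completed (the exact call site is elided above):

    // later, once the application is ready to accept traffic:
    health_reporter
        .set_service_status("", ServingStatus::Serving)
        .await;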

Kubernetes server version: v1.30.5

We were wondering whether this issue describes our problem, since the Kubernetes probe uses a Go client. However, we already bind the host to 0.0.0.0, so that does not seem to be the cause.
