
add full-stack rama benchmarks #374

Open
GlenDC opened this issue Dec 30, 2024 · 6 comments
Assignees
Labels
easy: An easy issue to pick up for anyone.
mentor available: A mentor is available to help you through the issue.
Milestone

Comments

@GlenDC
Member

GlenDC commented Dec 30, 2024

Within https://github.com/plabayo/rama/tree/main/benches it would be good to have a couple of benchmarks, also driven by divan, for:

h1 (over tcp):

  • client <-> server
  • client <-> proxy <-> server

h2 (over tcp):

  • client <-> server
  • client <-> proxy <-> server

Use a realistic number of layers as well, to make sure we get a good idea of the full stack.

This probably would be enough to resolve #340,
as it would allow us to track over time how we are doing performance- and allocation-wise.

Bonus points if you can figure out how to get some nice summary tables out of these benchmarks so we can have them visible somewhere, and even better if we can make them part of the CI + PR check. That could however be done as a separate ticket/PR, so first focus on just having the missing e2e benchmarks.

@GlenDC GlenDC added this to the v0.2 milestone Dec 30, 2024
@GlenDC GlenDC added easy An easy issue to pick up for anyone. mentor available A mentor is available to help you through the issue. labels Dec 30, 2024
@ASamedWalker

Hi,
I am very interested in contributing to this issue. Although I am new to Rust and this would be my first time contributing to open-source, I am eager to learn and would greatly appreciate any guidance to help me address this task. While I will do my best to make progress promptly, I may require additional time and support to ensure the solution meets the project’s standards. Thanks

@GlenDC
Member Author

GlenDC commented Jan 4, 2025

It's not a particularly difficult task @ASamedWalker, but using rama does require being comfortable with Rust.

Before you decide to commit, can you look through the /examples and tell me if you can read that code and understand what's going on? That's one of the prerequisites for being able to do this ticket.

If you have questions about it feel free to shoot me an email or join our discord.

@AnkurRathore
Contributor

@GlenDC sure, I can pick this one up. This means we need to add additional benchmark tests apart from the h2 ones. How should these differ from the existing ones, and is any particular setup required?

@GlenDC
Member Author

GlenDC commented Feb 9, 2025

These would be additional files in https://github.com/plabayo/rama/tree/main/benches written using divan as the benchmarker skeleton/framework.

These are different in that they benchmark end to end: the other benchmarks in there cover specific units, while these new ones exercise the entire stack.

That's why I suggested having these benchmark files:

  • ./benches/e2e_h1_client_server.rs
  • ./benches/e2e_h1_tls_client_server.rs
  • ./benches/e2e_h1_client_proxy_server.rs
  • ./benches/e2e_h1_tls_client_transport_proxy_server.rs
  • ./benches/e2e_h1_tls_client_mitm_proxy_server.rs
  • ./benches/e2e_h2_client_server.rs
  • ./benches/e2e_h2_tls_client_server.rs
  • ./benches/e2e_h2_client_proxy_server.rs
  • ./benches/e2e_h2_tls_client_transport_proxy_server.rs
  • ./benches/e2e_h2_tls_client_mitm_proxy_server.rs
  • ./benches/e2e_auto_client_server.rs
  • ./benches/e2e_auto_tls_client_server.rs
  • ./benches/e2e_auto_client_proxy_server.rs
  • ./benches/e2e_auto_tls_client_transport_proxy_server.rs
  • ./benches/e2e_auto_tls_client_mitm_proxy_server.rs

You can look at ./examples code on how to setup clients, servers and proxies.

Given there will be a lot of similar code in between these different files feel free to
write utilities for these benches such that there's barely any code repetition.

If you do this correctly it should not be too much code, yet give us a pretty good set of numbers.

Each file should have at least two benchmark functions: one for a small payload and one for a large payload.

@AnkurRathore
Contributor

@GlenDC I did run some of the examples to get a better understanding. What I am not able to figure out is what h1, h2 and auto mean in the tests. Suppose I want to write a benchmark test for e2e_h1_client_server, would this mean I have to do the following:

  1. Implement benchmark scenarios for single and concurrent requests
  2. Run the benchmark and observe for
    a. Server startup and shutdown
    b. Single request latency
    c. Concurrent requests handling
    d. Throughput measurement.

If so, then I think the steps will be common for the mentioned benchmark tests, only the type of connection or service will differ.

@GlenDC
Member Author

GlenDC commented Feb 16, 2025

@AnkurRathore

  • divan will, AFAIK, always run a single scenario sequentially, not concurrently. But that's okay; let's start simple and just use a regular async fn with divan.
  • You would not have to do all of that, no. What you essentially need to do is create a client, a server, and, in case it's a proxy test, also a proxy, and then just send requests through it with random data.

It's pretty simple, but I would start with that.

In case it is a bit too abstract/vague for you, I could push a solution for one scenario to the main branch, with instructions on how to expand it for more. Up to you how much help you need with this. Or feel free to ask more questions, or make a quick draft PR to check your understanding. All is possible.
