Home
KestrelRateLimit is an ASP.NET Core rate-limiting solution designed to control the rate of requests that clients can make to a Web API or MVC app, based on IP address or client ID. The KestrelRateLimit NuGet package contains an IpRateLimitMiddleware and a ClientRateLimitMiddleware. With each middleware you can set multiple limits for different scenarios, such as allowing an IP or client a maximum number of calls per time interval (per second, per 15 minutes, and so on). You can apply these limits to all requests made to the API, or scope them to each endpoint by HTTP verb and path.
KestrelRateLimit targets .NET Framework 4.6 and .NET Standard 1.6. The package has the following dependencies: Microsoft.AspNetCore.Mvc 1.0 and NETStandard.Library 1.6.
NuGet install:
Install-Package KestrelRateLimit
Startup.cs code:
public void ConfigureServices(IServiceCollection services)
{
    // needed to load configuration from appsettings.json
    services.AddOptions();

    // needed to store rate limit counters and ip rules
    services.AddMemoryCache();

    // load general configuration from appsettings.json
    services.Configure<IpRateLimitOptions>(Configuration.GetSection("IpRateLimiting"));

    // load ip rules from appsettings.json
    services.Configure<IpRateLimitPolicies>(Configuration.GetSection("IpRateLimitPolicies"));

    // inject counter and rules stores
    services.AddSingleton<IIpPolicyStore, MemoryCacheIpPolicyStore>();
    services.AddSingleton<IRateLimitCounterStore, MemoryCacheRateLimitCounterStore>();

    // Add framework services.
    services.AddMvc();
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
{
    loggerFactory.AddConsole(Configuration.GetSection("Logging"));
    loggerFactory.AddDebug();

    app.UseIpRateLimiting();

    app.UseMvc();
}
You should register the middleware before any other components except loggerFactory.
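The package also contains a ClientRateLimitMiddleware, as mentioned above. A minimal registration sketch for it, assuming the client-side type and section names simply mirror the IP-based ones (ClientRateLimitOptions, ClientRateLimitPolicies, IClientPolicyStore, MemoryCacheClientPolicyStore, UseClientRateLimiting and the "ClientRateLimiting" sections are naming assumptions, not confirmed here):

// client-id based rate limiting, mirroring the IP setup above
services.Configure<ClientRateLimitOptions>(Configuration.GetSection("ClientRateLimiting"));
services.Configure<ClientRateLimitPolicies>(Configuration.GetSection("ClientRateLimitPolicies"));
services.AddSingleton<IClientPolicyStore, MemoryCacheClientPolicyStore>();
services.AddSingleton<IRateLimitCounterStore, MemoryCacheRateLimitCounterStore>();

// and in Configure:
app.UseClientRateLimiting();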
If you load-balance your app, you'll need to use IDistributedCache backed by Redis or SQL Server so that all Kestrel instances share the same rate limit store.
Instead of the in-memory stores, inject the distributed stores like this:
// inject counter and rules distributed cache stores
services.AddSingleton<IIpPolicyStore, DistributedCacheIpPolicyStore>();
services.AddSingleton<IRateLimitCounterStore, DistributedCacheRateLimitCounterStore>();
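For example, a Redis-backed IDistributedCache can be registered in ConfigureServices roughly like this (a sketch assuming the Microsoft.Extensions.Caching.Redis package; the connection string and instance name are placeholders):

// Redis-backed IDistributedCache shared by all instances
services.AddDistributedRedisCache(options =>
{
    options.Configuration = "localhost:6379";   // placeholder Redis connection string
    options.InstanceName = "KestrelRateLimit";  // placeholder key prefix
});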
Configuration and general rules appsettings.json:
"IpRateLimiting": {
"EnableEndpointRateLimiting": false,
"StackBlockedRequests": false,
"RealIpHeader": "X-Real-IP",
"ClientIdHeader": "X-ClientId",
"HttpStatusCode": 429,
"IpWhitelist": [ "127.0.0.1", "::1/10", "192.168.0.0/24" ],
"EndpointWhitelist": [ "get:/api/license", "*:/api/status" ],
"ClientWhitelist": [ "dev-id-1", "dev-id-2" ],
"GeneralRules": [
{
"Endpoint": "*",
"Period": "1s",
"Limit": 2
},
{
"Endpoint": "*",
"Period": "15m",
"Limit": 100
},
{
"Endpoint": "*",
"Period": "12h",
"Limit": 1000
},
{
"Endpoint": "*",
"Period": "7d",
"Limit": 10000
}
]
}
If EnableEndpointRateLimiting is set to false, the limits apply globally and only rules that have * as the endpoint apply. For example, if you set a limit of 5 calls per second, any HTTP call to any endpoint counts towards that limit.
If EnableEndpointRateLimiting is set to true, the limits apply per endpoint, as in {HTTP_Verb}{PATH}. For example, if you set a limit of 5 calls per second for *:/api/values, a client can call GET /api/values 5 times per second and also PUT /api/values 5 times per second.
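For instance, to rate limit a single endpoint independently, you could enable endpoint rate limiting and add an endpoint-scoped rule (an illustrative appsettings.json fragment using the keys shown above):

"IpRateLimiting": {
  "EnableEndpointRateLimiting": true,
  "GeneralRules": [
    {
      "Endpoint": "get:/api/values",
      "Period": "1s",
      "Limit": 5
    }
  ]
}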
If StackBlockedRequests is set to false, rejected calls are not added to the throttle counter. If a client makes 3 requests per second and you've set a limit of one call per second, the other limits (per minute, per day, etc.) will only record the first call, the one that wasn't blocked. If you want rejected requests to count towards the other limits, set StackBlockedRequests to true.
The RealIpHeader is used to extract the client IP when your Kestrel server is behind a reverse proxy; if your proxy uses a different header than X-Real-IP, use this option to set it.
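For example, if your proxy forwards the client address in X-Forwarded-For (an assumption about your proxy setup), the configuration would be:

"IpRateLimiting": {
  "RealIpHeader": "X-Forwarded-For"
}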
The ClientIdHeader is used to extract the client ID for whitelisting; if a client ID is present in this header and matches a value specified in ClientWhitelist, no rate limits are applied.
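For example, a whitelisted client could send its ID in the configured header like this (a minimal sketch; the base address and endpoint are placeholders, and "dev-id-1" must appear in ClientWhitelist):

using System;
using System.Net.Http;

class WhitelistedClientDemo
{
    static void Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

        // matches an entry in ClientWhitelist, so rate limits are skipped for this request
        client.DefaultRequestHeaders.Add("X-ClientId", "dev-id-1");

        var response = client.GetAsync("/api/values").GetAwaiter().GetResult();
        Console.WriteLine((int)response.StatusCode);
    }
}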
Override general rules for specific IPs appsettings.json:
"IpRateLimitPolicies": {
"IpRules": [
{
"Ip": "84.247.85.224",
"Rules": [
{
"Endpoint": "*",
"Period": "1s",
"Limit": 10
},
{
"Endpoint": "*",
"Period": "15m",
"Limit": 200
}
]
},
{
"Ip": "192.168.3.22/25",
"Rules": [
{
"Endpoint": "*",
"Period": "1s",
"Limit": 5
},
{
"Endpoint": "*",
"Period": "15m",
"Limit": 150
},
{
"Endpoint": "*",
"Period": "12h",
"Limit": 500
}
]
}
]
}
The Ip field supports IPv4 and IPv6 values and ranges like "192.168.0.0/24", "fe80::/10" or "192.168.0.0-192.168.0.255".
A rule is composed of an endpoint, a period and a limit.
Endpoint format is {HTTP_Verb}:{PATH}; you can target any HTTP verb by using the asterisk symbol.
Period format is {INT}{PERIOD_TYPE}; you can use one of the following period types: s, m, h, d.
Limit format is {LONG}.
Examples:
Rate limit all endpoints to 2 calls per second:
{
  "Endpoint": "*",
  "Period": "1s",
  "Limit": 2
}
If, from the same IP, you make 3 GET calls to api/values in the same second, the last call will get blocked. But if in the same second you also call PUT api/values, that request will go through because it's a different endpoint. When endpoint rate limiting is enabled, each call is rate limited based on {HTTP_Verb}{PATH}.
Rate limit calls with any HTTP verb to /api/values to 5 calls per 15 minutes:
{
  "Endpoint": "*:/api/values",
  "Period": "15m",
  "Limit": 5
}
Rate limit GET calls to /api/values to 5 calls per hour:
{
  "Endpoint": "get:/api/values",
  "Period": "1h",
  "Limit": 5
}
If, from the same IP, you make 6 GET calls to api/values in one hour, the last call will get blocked. But if in the same hour you also call GET api/values/1, that request will go through because it's a different endpoint.
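A quick way to observe this behavior (a minimal sketch; the base address is a placeholder and it assumes the get:/api/values rule above is active):

using System;
using System.Net.Http;

class RateLimitDemo
{
    static void Main()
    {
        var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

        // With the get:/api/values rule above (5 calls per hour), the 6th call
        // should be rejected with the configured HttpStatusCode (429 by default).
        for (var i = 1; i <= 6; i++)
        {
            var response = client.GetAsync("/api/values").GetAwaiter().GetResult();
            Console.WriteLine($"Call {i}: {(int)response.StatusCode}");
        }
    }
}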