node-rate-limiter-flexible
rate-limiter-flexible counts and limits the number of actions by key and protects against DDoS and brute force attacks at any scale.
It works with _Redis_, process _Memory_, _Cluster_ or _PM2_, _Memcached_, _MongoDB_, _MySQL_, _PostgreSQL_ and allows you to control the request rate in a single process or in a distributed environment.
The Memory limiter also works in the browser.
Atomic increments. All operations in memory or in a distributed environment use atomic increments to avoid race conditions.
Allow traffic bursts with BurstyRateLimiter.
Flexible. Combine limiters, block a key for some duration, delay actions, manage failover with insurance options, configure smart key blocking in memory, and more.
Ready for growth. It provides a unified API for all limiters, so it is ready whenever your application grows. Prepare your limiters in minutes.
Friendly. It works with whichever Node package you prefer: redis or ioredis, sequelize/typeorm or knex, memcached, native driver or mongoose.
In-memory blocks. Avoid extra requests to the store with inMemoryBlockOnConsumed.
Deno compatible. See this example
It uses the fixed window algorithm, as it is much faster than a rolling window.
Installation
```sh
npm i --save rate-limiter-flexible
```

```sh
yarn add rate-limiter-flexible
```
Basic Example
Points can be consumed by IP address, user ID, authorisation token, API route or any other string.
```js
const { RateLimiterMemory } = require('rate-limiter-flexible');

const opts = {
  points: 6, // 6 points
  duration: 1, // Per second
};

const rateLimiter = new RateLimiterMemory(opts);

rateLimiter.consume(remoteAddress, 2) // consume 2 points
  .then((rateLimiterRes) => {
    // 2 points consumed
  })
  .catch((rateLimiterRes) => {
    // Not enough points to consume
  });
```
RateLimiterRes object
Both the Promise resolve and reject return an object of the RateLimiterRes class, unless there is an error.
Object attributes:
```js
RateLimiterRes = {
  msBeforeNext: 250, // Number of milliseconds before next action can be done
  remainingPoints: 0, // Number of remaining points in current duration
  consumedPoints: 5, // Number of consumed points in current duration
  isFirstInDuration: false, // action is first in current duration
}
```
You may want to set the following HTTP headers on the response:
```js
const headers = {
  "Retry-After": rateLimiterRes.msBeforeNext / 1000,
  "X-RateLimit-Limit": opts.points,
  "X-RateLimit-Remaining": rateLimiterRes.remainingPoints,
  "X-RateLimit-Reset": new Date(Date.now() + rateLimiterRes.msBeforeNext)
}
```
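For example, here is a minimal sketch of an Express middleware that ties the basic example and these headers together. Express itself, the 429 response and the exact header formatting are assumptions for illustration, not part of this package.

```js
const express = require('express');
const { RateLimiterMemory } = require('rate-limiter-flexible');

const rateLimiter = new RateLimiterMemory({ points: 6, duration: 1 });
const app = express();

app.use((req, res, next) => {
  rateLimiter.consume(req.ip) // consume 1 point per request, keyed by IP
    .then((rateLimiterRes) => {
      // Enough points: expose rate limit headers and continue
      res.set('X-RateLimit-Limit', '6');
      res.set('X-RateLimit-Remaining', String(rateLimiterRes.remainingPoints));
      res.set('X-RateLimit-Reset', new Date(Date.now() + rateLimiterRes.msBeforeNext).toISOString());
      next();
    })
    .catch((rateLimiterRes) => {
      // Not enough points: tell the client when to retry
      res.set('Retry-After', String(Math.ceil(rateLimiterRes.msBeforeNext / 1000)));
      res.status(429).send('Too Many Requests');
    });
});
```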
Advantages:
no race conditions
no production dependencies
TypeScript declaration bundled
allow traffic bursts with BurstyRateLimiter (see the sketch after this list)
Block Strategy against really powerful DDoS attacks (like 100k requests per sec). Read about it and benchmarking here
Insurance Strategy as an emergency solution if the database / store is down. Read about Insurance Strategy here
works in Cluster or PM2 without additional software. See RateLimiterCluster benchmark and detailed description here
useful get, set, block, delete, penalty and reward methods
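As an illustration of BurstyRateLimiter, a minimal sketch; the point and duration values and the key are placeholders, not recommendations:

```js
const { RateLimiterMemory, BurstyRateLimiter } = require('rate-limiter-flexible');

// Sustained limit of 2 points per second, with a burst allowance of
// 5 extra points per 10 seconds handled by the second limiter.
const burstyLimiter = new BurstyRateLimiter(
  new RateLimiterMemory({ points: 2, duration: 1 }),
  new RateLimiterMemory({ keyPrefix: 'burst', points: 5, duration: 10 })
);

burstyLimiter.consume('some-key')
  .then((rateLimiterRes) => {
    // allowed: within the sustained limit or the burst allowance
  })
  .catch((rateLimiterRes) => {
    // both limits are exhausted
  });
```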
Middlewares, plugins and other packages
GraphQL graphql-rate-limit-directive
NestJS try nestjs-rate-limiter
Fastify-based NestJS app: try nestjs-fastify-rate-limiter
Some copy/paste examples on Wiki:
Migration from other packages
express-brute Bonus: race conditions fixed, prod deps removed
limiter Bonus: multi-server support, respects queue order, native promises
Docs and Examples
BurstyRateLimiter Traffic burst support
RateLimiterMongo (with sharding support)
RateLimiterMySQL (supports Sequelize and Knex)
RateLimiterPostgres (supports Sequelize, TypeORM and Knex)
RateLimiterUnion Combine 2 or more limiters to act as a single limiter
RLWrapperBlackAndWhite Black and White lists
RateLimiterQueue Rate limiter with FIFO queue
Changelog
See releases for detailed changelog.
Basic Options
points
Default: 4
Maximum number of points that can be consumed over duration.
duration
Default: 1
Number of seconds before consumed points are reset.
Points are never reset if duration is set to 0.
storeClient
Required for store limiters
Must be a redis, ioredis, memcached, mongodb, pg, mysql2 or mysql client, or any other related pool or connection.
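For example, a minimal sketch of a store-backed limiter, assuming ioredis and a locally running Redis; the keyPrefix and limits are placeholders:

```js
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

// Any compatible client or pool can be passed as storeClient; ioredis is just one option.
const redisClient = new Redis({ enableOfflineQueue: false });

const rateLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login', // keeps keys unique among different limiters
  points: 10,         // 10 actions
  duration: 1,        // per second by key
});
```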
Other options on Wiki:
keyPrefix Make keys unique among different limiters.
blockDuration Block key for N seconds if more than points are consumed.
inMemoryBlockOnConsumed Avoid extra requests to the store (see the sketch after these options).
insuranceLimiter Make it more stable with less effort.
storeType Must be set to knex if you use it.
dbName Where to store points.
tableName Table/collection.
tableCreated Whether the table is already created in MySQL or PostgreSQL.
clearExpiredByTimeout For MySQL and PostgreSQL.
Smooth out traffic peaks:
Specific:
indexKeyPrefix Combined indexes of MongoDB.
timeoutMs For Cluster.
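A short sketch of the blocking options combined; the values are illustrative only, and the companion inMemoryBlockDuration option is assumed from the Wiki:

```js
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const rateLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  points: 100,                  // 100 points
  duration: 1,                  // per second
  blockDuration: 10,            // block the key in the store for 10 seconds once points are exceeded
  inMemoryBlockOnConsumed: 100, // stop querying the store after 100 points are consumed
  inMemoryBlockDuration: 10,    // keep that in-memory block for 10 seconds
});
```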
API
Read detailed description on Wiki.
consume(key, points = 1) Consume points by key.
set(key, points, secDuration) Set points by key.
block(key, secDuration) Block key for secDuration seconds.
delete(key) Reset consumed points.
penalty(key, points = 1) Increase number of consumed points in current duration.
reward(key, points = 1) Decrease number of consumed points in current duration.
getKey(key) Get internal prefixed key.
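A short sketch of these methods with the in-memory limiter; the key and point values are placeholders:

```js
const { RateLimiterMemory } = require('rate-limiter-flexible');

const limiter = new RateLimiterMemory({ points: 5, duration: 1 });

(async () => {
  await limiter.consume('user-1');      // 1 point consumed, 4 remaining
  await limiter.penalty('user-1', 2);   // 3 points consumed now
  await limiter.reward('user-1', 1);    // back to 2 points consumed
  const res = await limiter.get('user-1');
  console.log(res.consumedPoints, res.remainingPoints); // 2 3
  await limiter.block('user-1', 10);    // block the key for 10 seconds
  await limiter.delete('user-1');       // reset consumed points for the key
})();
```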
Benchmark
Average latency during a test of a pure NodeJS endpoint in a cluster of 4 workers, with everything set up on one server.
1000 concurrent clients with a maximum of 2000 requests per sec during 30 seconds.
```text
1. Memory     0.34 ms
2. Cluster    0.69 ms
3. Redis      2.45 ms
4. Memcached  3.89 ms
5. Mongo      4.75 ms
```
500 concurrent clients with a maximum of 1000 req per sec during 30 seconds.
```text
6. PostgreSQL  7.48 ms (with connection pool max 100)
7. MySQL      14.59 ms (with connection pool 100)
```
Note: you can speed up limiters with the inMemoryBlockOnConsumed option.
Contribution
Appreciated, feel free!
Make sure you've run npm run eslint before creating a PR; all errors have to be fixed.
You can try to run npm run eslint-fix to fix some issues.
Any new limiter with a store has to extend RateLimiterStoreAbstract.
It has to implement 4 methods:
_getRateLimiterRes parses raw data from store to RateLimiterRes object.
_upsert must be atomic. It inserts or updates a value by key and returns raw data. It must support forceExpire mode to overwrite the key's expiration time.
_get returns raw data by key or null if there is no key.
_delete deletes all key-related data and returns true if deleted, false if the key is not found.
All other methods depend on the store. See RateLimiterRedis or RateLimiterPostgres for examples.
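For orientation, a rough skeleton of such a limiter. It uses a plain Map in place of a real database so the snippet runs; a real store must provide a genuinely atomic upsert. The deep require paths, the class name and the record shape are assumptions, so follow RateLimiterRedis or RateLimiterPostgres for the exact signatures.

```js
const RateLimiterStoreAbstract = require('rate-limiter-flexible/lib/RateLimiterStoreAbstract');
const RateLimiterRes = require('rate-limiter-flexible/lib/RateLimiterRes');

class RateLimiterToyStore extends RateLimiterStoreAbstract {
  constructor(opts) {
    super(opts);
    this._storage = new Map(); // rlKey -> { points, expire }
  }

  _getRateLimiterRes(rlKey, changedPoints, result) {
    // Parse raw store data into a RateLimiterRes object.
    const res = new RateLimiterRes();
    res.consumedPoints = result.points;
    res.isFirstInDuration = result.points === changedPoints;
    res.remainingPoints = Math.max(this.points - res.consumedPoints, 0);
    res.msBeforeNext = result.expire === null ? -1 : Math.max(result.expire - Date.now(), 0);
    return res;
  }

  _upsert(rlKey, points, msDuration, forceExpire = false) {
    // Insert or update the value by key and return raw data.
    // This Map version is only safe because Node is single-threaded;
    // a real store needs an atomic upsert (e.g. INCR + PEXPIRE, upsert query).
    const now = Date.now();
    const existing = this._storage.get(rlKey);
    const expired = existing && existing.expire !== null && existing.expire <= now;
    if (!existing || expired || forceExpire) {
      // New key, expired key, or forceExpire: overwrite points and expiration.
      const record = { points, expire: msDuration > 0 ? now + msDuration : null };
      this._storage.set(rlKey, record);
      return Promise.resolve(record);
    }
    existing.points += points;
    return Promise.resolve(existing);
  }

  _get(rlKey) {
    // Return raw data by key, or null if there is no (unexpired) key.
    const record = this._storage.get(rlKey);
    if (!record || (record.expire !== null && record.expire <= Date.now())) {
      return Promise.resolve(null);
    }
    return Promise.resolve(record);
  }

  _delete(rlKey) {
    // true if the key was deleted, false if it was not found.
    return Promise.resolve(this._storage.delete(rlKey));
  }
}
```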
Note: all changes should be covered by tests.