Overall example

Roman edited this page Jun 24, 2024 · 60 revisions

How to protect

All examples are written for Express and a Redis store, but the same ideas apply to any limiter in a Koa, Hapi, Nest, or pure Node.js application, etc.

Create a rate limiter and consume points on every request

Any store limiter, such as Mongo or MySQL, can be used in a distributed environment as well.

```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 10, // Number of points
  duration: 1, // Per second
});

const rateLimiterMiddleware = (req, res, next) => {
  rateLimiterRedis.consume(req.ip)
    .then(() => {
      next();
    })
    .catch(_ => {
      res.status(429).send('Too Many Requests');
    });
};

app.use(rateLimiterMiddleware);
```

The rate limiter consumes 1 point per IP for every request to the application, limiting a user to 10 requests per second. It works in distributed environments, since all limits are stored in Redis.

The Memory limiter can be used if the application is launched as a single process.

The Cluster limiter is available for an application launched on a single server.

Minimal protection against password brute-force

Disallow too many wrong password tries. Block the user account for some period of time when the limit is reached.

The idea is simple:

  1. Get the number of wrong tries and block the request if the limit is reached.
  2. If the password is correct, reset the wrong-tries count.
  3. If the password is wrong, increment the count.
```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');
// You may also use Mongo, Memory or any other limiter type

const redisClient = new Redis({ enableOfflineQueue: false });

const maxConsecutiveFailsByUsername = 5;

const limiterConsecutiveFailsByUsername = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_fail_consecutive_username',
  points: maxConsecutiveFailsByUsername,
  duration: 60 * 60 * 3, // Store number for three hours since first fail
  blockDuration: 60 * 15, // Block for 15 minutes
});

async function loginRoute(req, res) {
  const username = req.body.email;
  const rlResUsername = await limiterConsecutiveFailsByUsername.get(username);

  if (rlResUsername !== null && rlResUsername.consumedPoints > maxConsecutiveFailsByUsername) {
    const retrySecs = Math.round(rlResUsername.msBeforeNext / 1000) || 1;
    res.set('Retry-After', String(retrySecs));
    res.status(429).send('Too Many Requests');
  } else {
    const user = authorise(username, req.body.password); // should be implemented in your project

    if (!user.isLoggedIn) {
      try {
        await limiterConsecutiveFailsByUsername.consume(username);
        res.status(400).end('email or password is wrong');
      } catch (rlRejected) {
        if (rlRejected instanceof Error) {
          throw rlRejected;
        } else {
          res.set('Retry-After', String(Math.round(rlRejected.msBeforeNext / 1000) || 1));
          res.status(429).send('Too Many Requests');
        }
      }
    }

    if (user.isLoggedIn) {
      if (rlResUsername !== null && rlResUsername.consumedPoints > 0) {
        // Reset on successful authorisation
        await limiterConsecutiveFailsByUsername.delete(username);
      }

      res.end('authorised');
    }
  }
}

const app = express();

app.post('/login', async (req, res) => {
  try {
    await loginRoute(req, res);
  } catch (err) {
    res.status(500).end();
  }
});
```

Note, this approach may be an issue for your users if somebody knows your service applies it. An attacker can schedule 5 wrong password tries every 15 minutes and keep a user's account blocked indefinitely. This should not be a problem for an MVP or the early stages of a startup.

If you wish to avoid that possible issue, you may:

  1. Additionally implement a trusted-device approach: save a token on the client after successful authorisation and check it for the exact username before applying brute-force limiting.
  2. Apply limiting by IP over short and long periods of time, as in this example.
  3. Apply the Login endpoint protection approach from the example below.

Login endpoint protection

A login endpoint should be protected against brute-force attacks. Additionally, it should be rate limited if rate limits are not set on a reverse proxy or load balancer. This example describes one possible way to protect against brute force and does not include global rate limiting.

Create 2 limiters. The first counts the number of consecutive failed attempts and allows a maximum of 10 per username and IP pair. The second blocks the IP for a day after 100 failed attempts per day.

```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const maxWrongAttemptsByIPperDay = 100;
const maxConsecutiveFailsByUsernameAndIP = 10;

const limiterSlowBruteByIP = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_fail_ip_per_day',
  points: maxWrongAttemptsByIPperDay,
  duration: 60 * 60 * 24,
  blockDuration: 60 * 60 * 24, // Block for 1 day, if 100 wrong attempts per day
});

const limiterConsecutiveFailsByUsernameAndIP = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_fail_consecutive_username_and_ip',
  points: maxConsecutiveFailsByUsernameAndIP,
  duration: 60 * 60 * 24 * 90, // Store number for 90 days since first fail
  blockDuration: 60 * 60, // Block for 1 hour
});

const getUsernameIPkey = (username, ip) => `${username}_${ip}`;

async function loginRoute(req, res) {
  const ipAddr = req.ip;
  const usernameIPkey = getUsernameIPkey(req.body.email, ipAddr);

  const [resUsernameAndIP, resSlowByIP] = await Promise.all([
    limiterConsecutiveFailsByUsernameAndIP.get(usernameIPkey),
    limiterSlowBruteByIP.get(ipAddr),
  ]);

  let retrySecs = 0;

  // Check if IP or Username + IP is already blocked
  if (resSlowByIP !== null && resSlowByIP.consumedPoints > maxWrongAttemptsByIPperDay) {
    retrySecs = Math.round(resSlowByIP.msBeforeNext / 1000) || 1;
  } else if (resUsernameAndIP !== null && resUsernameAndIP.consumedPoints > maxConsecutiveFailsByUsernameAndIP) {
    retrySecs = Math.round(resUsernameAndIP.msBeforeNext / 1000) || 1;
  }

  if (retrySecs > 0) {
    res.set('Retry-After', String(retrySecs));
    res.status(429).send('Too Many Requests');
  } else {
    const user = authorise(req.body.email, req.body.password); // should be implemented in your project

    if (!user.isLoggedIn) {
      // Consume 1 point from limiters on wrong attempt and block if limits reached
      try {
        const promises = [limiterSlowBruteByIP.consume(ipAddr)];
        if (user.exists) {
          // Count failed attempts by Username + IP only for registered users
          promises.push(limiterConsecutiveFailsByUsernameAndIP.consume(usernameIPkey));
        }

        await Promise.all(promises);

        res.status(400).end('email or password is wrong');
      } catch (rlRejected) {
        if (rlRejected instanceof Error) {
          throw rlRejected;
        } else {
          res.set('Retry-After', String(Math.round(rlRejected.msBeforeNext / 1000) || 1));
          res.status(429).send('Too Many Requests');
        }
      }
    }

    if (user.isLoggedIn) {
      if (resUsernameAndIP !== null && resUsernameAndIP.consumedPoints > 0) {
        // Reset on successful authorisation
        await limiterConsecutiveFailsByUsernameAndIP.delete(usernameIPkey);
      }

      res.end('authorized');
    }
  }
}

const app = express();

app.post('/login', async (req, res) => {
  try {
    await loginRoute(req, res);
  } catch (err) {
    res.status(500).end();
  }
});
```

Nest.js gist example.

The example could be simplified by replacing the two get requests at the beginning with two consume calls, but there are concerns. First, consume calls are more expensive: imagine somebody DDoSes the login endpoint and the database gets millions of upsert requests. Second, if a consume call for a random username is allowed, it can overflow the storage with junk keys.

See more examples of login endpoint protection in the "Brute-force protection Node.js examples" article.

Prevent flooding over a single websocket connection

The simplest approach is rate limiting by IP.

```javascript
const app = require('http').createServer();
const io = require('socket.io')(app);
const { RateLimiterMemory } = require('rate-limiter-flexible');

app.listen(3000);

const rateLimiter = new RateLimiterMemory({
  points: 5, // 5 points
  duration: 1, // per second
});

io.on('connection', (socket) => {
  socket.on('bcast', async (data) => {
    try {
      await rateLimiter.consume(socket.handshake.address); // consume 1 point per event from IP
      socket.emit('news', { 'data': data });
      socket.broadcast.emit('news', { 'data': data });
    } catch (rejRes) {
      // no available points to consume
      // emit error or warning message
      socket.emit('blocked', { 'retry-ms': rejRes.msBeforeNext });
    }
  });
});
```

It may be an issue if there are many users behind one IP address. If there is a login procedure or a `uniqueUserId`, use it to limit on a per-user basis. Otherwise, you may limit by `socket.id` and restrict the number of allowed re-connections from the same IP.

If the websocket server is launched as a cluster or with PM2, you should use RateLimiterCluster, or RateLimiterCluster with PM2.

The Cluster and PM2 limiter is also enough if you use sticky load balancing. However, if the cluster master process is restarted, all counters are reset.

Consider RateLimiterRedis or any other store limiter for multiple websocket server nodes.

Dynamic block duration

A well-known authorisation protection technique is increasing the block duration on consecutive failed attempts.

Here is the logic:

  1. Maximum 5 fails per 15 minutes. Consume one point on each failed login attempt.
  2. If there are no remaining points, increment a counter N for the user who failed.
  3. Block authorisation for that user for some period of time depending on N.
  4. Clear counter N on successful login.
```javascript
const Ioredis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Ioredis({});

const loginLimiter = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login',
  points: 5, // 5 attempts
  duration: 15 * 60, // within 15 minutes
});

const limiterConsecutiveOutOfLimits = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_consecutive_outoflimits',
  points: 99999, // doesn't matter much, this is just a counter
  duration: 0, // never expire
});

function getFibonacciBlockDurationMinutes(countConsecutiveOutOfLimits) {
  if (countConsecutiveOutOfLimits <= 1) {
    return 1;
  }

  return getFibonacciBlockDurationMinutes(countConsecutiveOutOfLimits - 1)
    + getFibonacciBlockDurationMinutes(countConsecutiveOutOfLimits - 2);
}

async function loginRoute(req, res) {
  const userId = req.body.email;
  const resById = await loginLimiter.get(userId);

  let retrySecs = 0;
  if (resById !== null && resById.remainingPoints <= 0) {
    retrySecs = Math.round(resById.msBeforeNext / 1000) || 1;
  }

  if (retrySecs > 0) {
    res.set('Retry-After', String(retrySecs));
    res.status(429).send('Too Many Requests');
  } else {
    const user = authorise(req.body.email, req.body.password); // should be implemented in your project

    if (!user.isLoggedIn) {
      if (user.exists) {
        try {
          const resConsume = await loginLimiter.consume(userId);
          if (resConsume.remainingPoints <= 0) {
            const resPenalty = await limiterConsecutiveOutOfLimits.penalty(userId);
            await loginLimiter.block(userId, 60 * getFibonacciBlockDurationMinutes(resPenalty.consumedPoints));
          }
        } catch (rlRejected) {
          if (rlRejected instanceof Error) {
            throw rlRejected;
          } else {
            res.set('Retry-After', String(Math.round(rlRejected.msBeforeNext / 1000) || 1));
            res.status(429).send('Too Many Requests');
            return; // avoid sending a second response below
          }
        }
      }

      res.status(400).end('email or password is wrong');
    }

    if (user.isLoggedIn) {
      await limiterConsecutiveOutOfLimits.delete(userId);
      res.end('authorized');
    }
  }
}
```

Note, this example may not be a good fit. If an attacker targets a user's account by email, the real user should have a way to prove they are real. Also, see a more flexible example of login protection here.

Authorized and not authorized users

Sometimes it is reasonable to distinguish between authorized and unauthorized requests. For example, an application must provide public access as well as serve registered, authorized users with different limits.

```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 300, // Number of points
  duration: 60, // Per 60 seconds
});

// req.userId should be set by someAuthMiddleware. It is up to you, how to do that
app.use(someAuthMiddleware);

const rateLimiterMiddleware = (req, res, next) => {
  // req.userId should be set
  const key = req.userId ? req.userId : req.ip;
  const pointsToConsume = req.userId ? 1 : 30;
  rateLimiterRedis.consume(key, pointsToConsume)
    .then(() => {
      next();
    })
    .catch(_ => {
      res.status(429).send('Too Many Requests');
    });
};

app.use(rateLimiterMiddleware);
```

This example is not ideally clean, because in some weird cases `userId` may be equal to `remoteAddress`. Make sure this never happens.

It consumes 30 points for every unauthorized request, or 1 point if the application recognises a user by ID.

Different limits for different parts of application

This can be achieved by creating independent limiters.

```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 300, // Number of points
  duration: 60, // Per 60 seconds
});

const rateLimiterRedisReports = new RateLimiterRedis({
  keyPrefix: 'rlreports',
  storeClient: redisClient,
  points: 10, // Only 10 points for reports per user
  duration: 60, // Per 60 seconds
});

// req.userId should be set by someAuthMiddleware. It is up to you, how to do that
app.use(someAuthMiddleware);

const rateLimiterMiddleware = (req, res, next) => {
  const key = req.userId ? req.userId : req.ip;
  if (req.path.indexOf('/report') === 0) {
    const pointsToConsume = req.userId ? 1 : 5;
    rateLimiterRedisReports.consume(key, pointsToConsume)
      .then(() => {
        next();
      })
      .catch(_ => {
        res.status(429).send('Too Many Requests');
      });
  } else {
    const pointsToConsume = req.userId ? 1 : 30;
    rateLimiterRedis.consume(key, pointsToConsume)
      .then(() => {
        next();
      })
      .catch(_ => {
        res.status(429).send('Too Many Requests');
      });
  }
};

app.use(rateLimiterMiddleware);
```

Different limiters can also be set per endpoint. It all depends on your requirements.

Apply in-memory Block Strategy to avoid extra requests to store

There is no need to increment the counter in the store if the key is already blocked in the current duration. This is also helpful against DDoS attacks.

```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 300, // Number of points
  duration: 60, // Per 60 seconds
  inMemoryBlockOnConsumed: 300, // If userId or IP consume >= 300 points per minute
});

// req.userId should be set by someAuthMiddleware. It is up to you, how to do that
app.use(someAuthMiddleware);

const rateLimiterMiddleware = (req, res, next) => {
  // req.userId should be set
  const key = req.userId ? req.userId : req.ip;
  const pointsToConsume = req.userId ? 1 : 30;
  rateLimiterRedis.consume(key, pointsToConsume)
    .then(() => {
      next();
    })
    .catch(_ => {
      res.status(429).send('Too Many Requests');
    });
};

app.use(rateLimiterMiddleware);
```

A userId is blocked in memory with the `inMemoryBlockOnConsumed` option when 300 or more points are consumed. The block expires when the points are reset in the store.

More details on in-memory Block Strategy here

Setup Insurance Strategy for store limiters

There may be many reasons to handle cases when a limits store like Redis is down:

  1. You have just started your project and do not want to spend time setting up a Redis Cluster or other stable infrastructure just to handle limits.
  2. You do not want to spend more money on 2 or more database instances.
  3. You need to limit access to an application and just want to sleep well over the weekend.

This example demonstrates a memory limiter as insurance. Note that it behaves differently when Redis is down: the Redis limiter shares 300 points across all Node.js processes, while the fallback memory limiter counts its points per process, not overall. We can compensate for that.

```javascript
const express = require('express');
const Redis = require('ioredis');
const { RateLimiterMemory, RateLimiterRedis } = require('rate-limiter-flexible');

const redisClient = new Redis({ enableOfflineQueue: false });

const app = express();

const rateLimiterMemory = new RateLimiterMemory({
  points: 60, // 300 / 5, if there are 5 processes at all
  duration: 60,
});

const rateLimiterRedis = new RateLimiterRedis({
  storeClient: redisClient,
  points: 300, // Number of points
  duration: 60, // Per 60 seconds
  inMemoryBlockOnConsumed: 301, // If userId or IP consume >= 301 points per minute
  inMemoryBlockDuration: 60, // Block it for a minute in memory, so no requests go to Redis
  insuranceLimiter: rateLimiterMemory,
});

// req.userId should be set by someAuthMiddleware. It is up to you, how to do that
app.use(someAuthMiddleware);

const rateLimiterMiddleware = (req, res, next) => {
  // req.userId should be set
  const key = req.userId ? req.userId : req.ip;
  const pointsToConsume = req.userId ? 1 : 30;
  rateLimiterRedis.consume(key, pointsToConsume)
    .then(() => {
      next();
    })
    .catch(_ => {
      res.status(429).send('Too Many Requests');
    });
};

app.use(rateLimiterMiddleware);
```

The added insurance `rateLimiterMemory` is used only when Redis cannot process a request for some reason. Any limiter from this package can be used as an insurance limiter. You can also keep another Redis instance running in case the first one goes down.

More details on Insurance Strategy here

Third-party API, crawler, bot rate limiting

RateLimiterQueue limits the number of requests and queues extra ones.

```javascript
const { RateLimiterMemory, RateLimiterQueue } = require('rate-limiter-flexible');
const fetch = require('node-fetch');

const limiterFlexible = new RateLimiterMemory({
  points: 1,
  duration: 2,
});

const limiterQueue = new RateLimiterQueue(limiterFlexible, {
  maxQueueSize: 100,
});

for (let i = 0; i < 200; i++) {
  limiterQueue.removeTokens(1)
    .then(() => {
      fetch('https://github.com/animir/node-rate-limiter-flexible')
        .then(() => {
          console.log(Date.now());
        })
        .catch(err => console.error(err));
    })
    .catch(() => {
      console.log('queue is full');
    });
}
```

In this example, it makes one request every two seconds. `maxQueueSize` is set to 100, so if you run this code, you should see something like:

```
...
queue is full
queue is full
queue is full
queue is full
queue is full
queue is full
1569046899363
1569046901391
1569046903491
1569046905192
...
```

You can omit the `maxQueueSize` option to queue as many requests as possible. Read more on RateLimiterQueue.
