Bot prevention refers to the methods used by web services to prevent access by automated processes.
Studies suggest that over half of the traffic on the internet is bot activity, of which over half is further classified as 'bad bots'.[1]
Bots are used for various purposes online. Some bots are used passively for web scraping purposes, for example, to gather information from airlines about flight prices and destinations. Other bots, such as sneaker bots, help the bot operator acquire high-demand luxury goods; sometimes these are resold on the secondary market at higher prices, in what is commonly known as 'scalping'.[2][3][4]
According to Imperva's Bad Bot Report 2025, bad bots now make up 37% of all internet traffic.[5]
Various fingerprinting and behavioural techniques are used to identify whether the client is a human user or a bot. In turn, bots use a range of techniques to avoid detection and appear like a human to the server.[2]
Browser fingerprinting techniques are the most common component in anti-bot protection systems. Data is usually collected through client-side JavaScript, which is then transmitted to the anti-bot service for analysis. The data collected includes results from JavaScript APIs (checking whether a given API is implemented and returns the results expected from a normal browser), rendering complex WebGL scenes, and using the Canvas API.[1][6]

TLS fingerprinting techniques categorise the client by analysing the supported cipher suites during the SSL/TLS handshake.[7] These fingerprints can be used to create whitelists/blacklists containing fingerprints of known browser stacks.[8] In 2017, Salesforce open sourced its TLS fingerprinting library, JA3.[9] Between August and September 2018, Akamai noticed a large increase in TLS tampering across its network to evade detection.[10][8]
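The idea behind JA3 can be illustrated with a short sketch: fields observed in the TLS ClientHello (protocol version, cipher suites, extensions, elliptic curves, and point formats) are joined into a comma-separated string and hashed with MD5. The field values below are illustrative placeholders, not the handshake of any real browser.

```python
import hashlib

def ja3_fingerprint(tls_version, ciphers, extensions, curves, point_formats):
    """Build a JA3-style fingerprint: join each field's decimal values
    with '-', join the five fields with ',', then MD5-hash the string."""
    fields = [
        str(tls_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    ja3_string = ",".join(fields)
    return ja3_string, hashlib.md5(ja3_string.encode()).hexdigest()

# Illustrative ClientHello values (not captured from a real client)
ja3_str, ja3_hash = ja3_fingerprint(
    tls_version=771,             # 0x0303, i.e. TLS 1.2
    ciphers=[4865, 4866, 4867],
    extensions=[0, 23, 65281],
    curves=[29, 23, 24],
    point_formats=[0],
)
print(ja3_str)   # 771,4865-4866-4867,0-23-65281,29-23-24,0
```

Because two clients with identical TLS stacks produce identical hashes, an anti-bot service can compare the hash against lists of fingerprints known to belong to genuine browsers or to common bot tooling.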
Behaviour-based techniques are also utilised, although less commonly than fingerprinting techniques, and rely on the idea that bots behave differently to human visitors. A common behavioural approach is to analyse a client's mouse movements and determine if they are typical of a human.[1][11]
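A toy version of such a check, assuming the client-side script has already reported a list of sampled cursor positions: a trace whose points are perfectly collinear suggests a scripted, straight-line movement that human hands rarely produce. Real systems use far richer features (velocity, curvature, timing); this is only a sketch.

```python
def looks_scripted(points, tolerance=1e-6):
    """Toy behavioural check: flag a cursor trace whose sampled (x, y)
    points are (near-)perfectly collinear."""
    if len(points) < 3:
        return False  # not enough data to judge
    (x0, y0), (x1, y1) = points[0], points[1]
    dx, dy = x1 - x0, y1 - y0
    for x, y in points[2:]:
        # Cross product of (dx, dy) and (x - x0, y - y0); zero => collinear.
        cross = dx * (y - y0) - dy * (x - x0)
        if abs(cross) > tolerance:
            return False
    return True

print(looks_scripted([(0, 0), (10, 10), (20, 20), (30, 30)]))  # True
print(looks_scripted([(0, 0), (10, 12), (19, 25), (33, 31)]))  # False
```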
More traditional techniques such as CAPTCHAs are also often employed; however, they are generally considered ineffective while simultaneously being obtrusive to human visitors.[12]
The use of JavaScript can prevent some bots that rely on basic requests (such as via cURL), as these will not load the detection script and hence will fail to progress.[1] A common method to bypass many techniques is to use a headless browser to simulate a real web browser and execute the client-side JavaScript detection scripts.[2][1] There are a variety of headless browsers that are used; some are custom (such as PhantomJS) but it is also possible to operate typical browsers such as Google Chrome in headless mode using a driver. Selenium is a common web automation framework that makes it easier to control the headless browser.[6][1] Anti-bot detection systems attempt to identify the implementation of methods specific to these headless browsers, or the lack of proper implementation of APIs that would be implemented in regular web browsers.[1][13]
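Server-side, such a system might score client properties reported by its detection script. The property names below mirror real signals (for example, `navigator.webdriver` is set to true under automation), but the scoring scheme itself is made up for illustration.

```python
def headless_score(client_props):
    """Count how many known headless-browser signals the reported client
    properties exhibit; a higher score suggests automation.

    Signals (illustrative):
    - navigator.webdriver reported as true (set under automation)
    - navigator.plugins empty (common in headless builds)
    - navigator.languages empty (seen in early headless Chrome)
    """
    score = 0
    if client_props.get("webdriver") is True:
        score += 1
    if client_props.get("plugins_length", 1) == 0:
        score += 1
    if client_props.get("languages_empty") is True:
        score += 1
    return score

suspect = {"webdriver": True, "plugins_length": 0, "languages_empty": False}
print(headless_score(suspect))  # 2
```

This is also why a bare cURL request fails outright: it never executes the detection script, so no properties are reported at all.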
The source code of these JavaScript files is typically obfuscated to make it harder to reverse engineer how the detection works.[6] Common techniques include:[14]
debugger statements, to prevent the use of debuggers such as DevTools

Anti-bot protection services are offered by various internet companies, such as Cloudflare,[15] DataDome[16] and Akamai.[17][18]
In the United States, the Better Online Tickets Sales Act (commonly known as the BOTS Act) was passed in 2016 to prevent some uses of bots in commerce.[19] A year later, the United Kingdom passed similar regulations in the Digital Economy Act 2017.[20][21] The effectiveness of these measures is disputed.[22]