Did you know that only 48% of current internet traffic is human?
That’s right, more than half of internet traffic is currently generated by robots, divided between good and bad bots.
In the first post of this blog series on bot management (linked above), we discussed the types of bots (good and bad) and the characteristics of the most common forms of attacks.
Perhaps a friend of yours has had a social media profile stolen by another user, or your credit card was once cloned during an online purchase. Those are examples of bot-driven attacks on individuals.
The damage when a bot attacks your company’s website is even greater, since both your data and your customers’ information are at risk. Worse, some consumers may come to believe that your business is not reliable or secure enough, resulting in lost sales, contract terminations, and other damage to reputation and revenue.
One kind of bot attack that causes considerable damage to e-commerce storefronts is denial of inventory, which consists of depleting the stock of goods or services by loading up shopping carts with no intent to complete the transaction.
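To make denial of inventory concrete, here is a minimal defensive sketch (our illustration, not something described in this post): putting a time limit on cart reservations so that stock held by abandoned carts is automatically released. The class, the in-memory store, and the TTL value are all hypothetical.

```python
import time

RESERVATION_TTL = 600  # seconds a cart may hold an item (hypothetical policy)

class Inventory:
    """Toy in-memory inventory with expiring cart reservations."""

    def __init__(self, stock):
        self.stock = stock            # item -> units available
        self.reservations = []        # list of (expires_at, item, qty)

    def reserve(self, item, qty):
        """Hold `qty` units of `item` for a cart; fail if stock is depleted."""
        self._expire(time.time())
        if self.stock.get(item, 0) < qty:
            return False              # nothing left to hold
        self.stock[item] -= qty
        self.reservations.append((time.time() + RESERVATION_TTL, item, qty))
        return True

    def _expire(self, now):
        """Return units from reservations whose TTL has passed."""
        live = []
        for expires_at, item, qty in self.reservations:
            if expires_at <= now:
                self.stock[item] += qty   # release hoarded stock
            else:
                live.append((expires_at, item, qty))
        self.reservations = live
```

With a policy like this, a bot that fills carts and walks away only removes items from sale for the TTL window instead of indefinitely.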
In this post, we’ll discuss bad bots in more detail: how they differ from one another and how they act to harm your website or application.
Generations of Bad Bots and Their Threats
As we mentioned, more than half of internet traffic today is not human, but generated by robots, some of which are good (such as virtual assistants, chatbots, and indexers) and others that are considered bad. Bad bots represent approximately 26% of internet traffic; they disrupt services, steal data, carry out fraud and other illegal activities, and can attack the APIs underlying websites and mobile apps.
In recent years, bots have evolved from simple scripting tools into advanced programs that mimic human browsing behavior, simulating real users to evade security systems.
Below are the four generations of bad bots and their characteristics:
- 1st generation (scripts) - large volumes of requests are sent to websites from just a few IP addresses. Threats: scraping, carding and form spam.
- 2nd generation (simulates browsers) - these bots operate through website testing and development tools known as “headless” browsers, such as versions of Chrome and Firefox that run in headless mode. The main threats are: DDoS attacks, scraping, form spam, skewed analytics and ad fraud.
- 3rd generation (mimicking human behavior) - these bots simulate human interactions such as mouse clicks and keystrokes. They are used for: account takeover, application DDoS, API abuse, carding and ad fraud, among others.
- 4th generation (distributed attacks simulating human behavior) - this generation of bots is the hardest to detect due to its advanced mimicry of human interaction, its ability to change user agents, and its ability to rotate through thousands of IP addresses. Their attacks include: account takeover, application DDoS, API abuse, carding and ad fraud.
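To illustrate why the generations matter for detection, here is a hedged sketch of the per-IP rate check that catches 1st-generation scripts (few IPs, many requests) but is easily evaded by 4th-generation bots rotating thousands of IPs. The window and threshold values are hypothetical, not recommendations.

```python
import time
from collections import defaultdict, deque

WINDOW = 60        # seconds of history to keep per IP (hypothetical)
THRESHOLD = 100    # requests per window that trigger a flag (hypothetical)

hits = defaultdict(deque)  # ip -> timestamps of recent requests

def looks_like_first_gen_bot(ip, now=None):
    """Flag IPs that send many requests in a short window.

    Effective against 1st-generation scripts that reuse a few IPs;
    a 4th-generation bot spreading the same load over thousands of
    IPs stays under THRESHOLD on every one of them.
    """
    now = time.time() if now is None else now
    q = hits[ip]
    q.append(now)
    # Drop timestamps that have aged out of the sliding window.
    while q and q[0] <= now - WINDOW:
        q.popleft()
    return len(q) > THRESHOLD
```

The evasion is arithmetic: 100,000 requests from 2,000 rotating IPs is only 50 requests per IP, far below any plausible per-IP threshold, which is why later generations require behavioral signals rather than volume counters.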
Management as a Mitigation Tool
A reasonable person might believe that simply blocking all bad bots is the right answer to their bot problems. However, it is not that simple.
First, blocking every bad bot is difficult because bot designers use sophisticated evasion techniques. Second, it is risky, because you could accidentally block legitimate traffic as well.
In fact, the best solution so far is bot management.
Below are five management techniques that can help keep bad bots at bay:
- fake data: feed the attacking bot modified content, misleading it in its attack attempts;
- visible CAPTCHA: this should be used carefully, since some sophisticated bots can solve certain challenges; even so, it can work in some situations;
- throttling: when a bot strikes persistently, a throttling approach can be effective, but it can still block access from legitimate sources (false positive);
- invisible challenge: this can involve expecting mouse movement or data typed into form fields, actions that a simple bot cannot complete;
- source block: blocking bots’ source IPs may seem reasonable and cost-effective, but a persistent attacker who updates their bot code frequently will find this mitigation easy to identify and overcome.
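As a concrete example of the throttling technique listed above, here is a minimal token-bucket rate limiter in Python (a sketch under assumed parameters; the rate and burst values are illustrative, not from this post):

```python
import time

class TokenBucket:
    """Throttle a client: allow `rate` requests per second on average,
    with bursts of up to `capacity` requests. Hypothetical sketch."""

    def __init__(self, rate, capacity, now=None):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        """Return True if a request may proceed, False if throttled."""
        now = time.monotonic() if now is None else now
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request throttled
```

Note the trade-off the post mentions: a shared NAT gateway or corporate proxy can exhaust one bucket with entirely legitimate traffic, which is the false-positive risk of any per-source throttle.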
But in order to fully mitigate a wide range of ever-evolving bad bot tactics, we advise our customers to adopt a bot management solution, integrated seamlessly into our platform using serverless functions. Today, Azion is proud to offer Radware Bot Manager, a proven leader in the fight against bad bots.
Through this partnership between Azion and Radware, you can build applications on our platform and add advanced bot management features in minutes, saving time and ensuring your websites and mobile apps are protected from automated threats.
In the next blog post in this series on bot management, we will detail this partnership between Azion and Radware and show you how to use our solutions together to mitigate bad bots on your company’s website once and for all.