DDoS attacks are back: How the threat landscape is changing
There has been a resurgence in denial of service attacks recently, and defending against them is costing enterprises dearly. Guaranteeing protection at a reasonable price is always difficult, but recent advances in Application Delivery Controller technology place it in the perfect position to mitigate these newer, more sophisticated DDoS attacks.
Not new but very costly
Denial of Service (DoS) attacks are not new. They were very popular back in the early 2000s, when the unscrupulous used them to hold popular sites to ransom: pay us, or we'll stop people reaching you and cost you money. The victims were the large online trading organisations that relied on the web for transactions and their very existence. Financial services, online gambling sites and payment processing companies were the obvious targets; Visa, Mastercard and PayPal are some of the key names that have suffered in the recent past.
Surveys consistently put loss of availability as the biggest cost of any security breach, so it is hardly surprising that a DoS attack is the most expensive form of attack for its victims. Published estimates put the cost of an attack anywhere from hundreds of thousands of dollars to tens of millions. It is hard to estimate the cost to any particular business, and averages hide much of the detail, but when all is said and done the cost is undoubtedly high.
Adding intelligence: DDoS on the comeback trail
Early DoS attacks consisted of simple tools generating packets from a single source aimed at a single destination, bombarding a site with network traffic and requests to freeze out legitimate requests for data. This would commonly fill a company's WAN pipe, leaving genuine users struggling to get a request in or answered. There were several variants, but this was basically the extent of the sophistication.
It was soon realised that adding more clients to an attack increased its potency, and the Distributed Denial of Service (DDoS) attack was born. These are more difficult to defend against, partly because of scale, but also because it becomes harder to separate legitimate users from attackers.
While volume-based attacks still figure today, attacks are moving up the stack, exploiting weaknesses in the protocols to exhaust or disable the resources of servers or other intermediate devices in the path and so prevent legitimate use. Attacks like SYN floods, Pings of Death and Smurf DDoS are designed to consume resources and crash their targets.
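To make the resource-exhaustion idea concrete, here is a minimal, purely illustrative sketch (in Python, not taken from any product) of how a mitigation device might spot a SYN flood by counting half-open handshakes per source. The `HALF_OPEN_LIMIT` threshold and the class name are assumptions for the example.

```python
from collections import defaultdict

# Illustrative threshold: flag a source once it holds too many
# half-open connections (SYN received, final ACK never sent).
HALF_OPEN_LIMIT = 100

class SynFloodMonitor:
    """Track half-open TCP handshakes per source IP (hypothetical sketch)."""

    def __init__(self, limit=HALF_OPEN_LIMIT):
        self.limit = limit
        self.half_open = defaultdict(set)  # src_ip -> set of connection ids

    def on_syn(self, src_ip, conn_id):
        """Record a new SYN; return True if this source now looks suspicious."""
        self.half_open[src_ip].add(conn_id)
        # A source far exceeding a normal handshake backlog looks like a flood.
        return len(self.half_open[src_ip]) > self.limit

    def on_ack(self, src_ip, conn_id):
        """Handshake completed: the connection is no longer half-open."""
        self.half_open[src_ip].discard(conn_id)

# A burst of SYNs from one source, none of which complete the handshake:
monitor = SynFloodMonitor(limit=3)
for i in range(5):
    suspicious = monitor.on_syn("203.0.113.9", i)  # flagged once backlog > 3
```

Real devices use SYN cookies and hardware counters rather than per-source sets, but the principle, noticing handshakes that never complete, is the same.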
Operating even higher on the intelligence scale are the application-layer attacks. These are designed to exploit vulnerabilities in the operating systems (e.g. Windows, UNIX, Linux, OpenBSD) and applications (e.g. Apache, IIS) that host the services themselves. They are often very difficult to detect because they consist of seemingly innocent and legitimate packets from a variety of sources.
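One common countermeasure at this layer is per-client request rate limiting. The sketch below is a hypothetical sliding-window limiter of the kind an ADC or web application firewall might apply; the window size and request limit are illustrative values, not vendor defaults.

```python
import time
from collections import defaultdict, deque

# Illustrative policy: at most 20 requests per client in any 1-second window.
WINDOW_SECONDS = 1.0
MAX_REQUESTS = 20

class RequestRateLimiter:
    """Sliding-window request counter per client (hypothetical sketch)."""

    def __init__(self, window=WINDOW_SECONDS, max_requests=MAX_REQUESTS):
        self.window = window
        self.max_requests = max_requests
        self.history = defaultdict(deque)  # client -> timestamps of recent requests

    def allow(self, client, now=None):
        """Return True if the request may pass, False if it should be throttled."""
        now = time.monotonic() if now is None else now
        q = self.history[client]
        # Drop timestamps that have fallen outside the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # sustained burst: looks like an application-layer flood
        q.append(now)
        return True
```

Because each request is individually well-formed, it is the aggregate rate per client, not any single packet, that reveals the attack.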
It is now possible to be very pointed and targeted with an attack. Combining different types of attack into a multi-vector attack can bring down an individual machine, a network device, a specific port or service, or even an entire network.
This flexibility is part of the reason we are witnessing a renaissance in DDoS attacks. According to Akamai's Q4 2014 State of the Internet report, DDoS attacks are:
- Increasing in number – a 57% year-on-year increase in the number of attacks
- Lasting longer – a 28% increase in attack duration
- Becoming more complex – an 84% increase in multi-vector attacks
The other big factor behind the increase is the ease with which a DDoS attack can now be set up. Access to the necessary resources is easier than ever. In the early days an attacker generally had to know what they were doing to create an attack; now sites advertise their ability to do it for anyone with an axe to grind – and a wallet to open. Rent-a-bot sites offer 80,000–120,000 hosts for as little as $200 a day. A distributed denial of service attack with that many clients is hard to defend against and will bring down most sites without trouble.
Prevention is far better than cure
Although statistics do not exist on how effective paying a ransom was, or is, at preventing a DDoS attack, it is an interim fix at best, and being seen as an easy target will invite future attacks. Indeed, an attack may not be launched for financial gain at all: it might be a political statement, for which there is no financial price of prevention.
The question remains therefore, how can these types of attacks, in all their sophistication, best be mitigated?
Several approaches have been tried in the past to tackle the problem. For all that has changed, the focus remains the same – exhaust some resource or other in the computing chain to bring down a system.
One approach is to handle a volume attack where there is more bandwidth to play with. Defending against a bandwidth attack at the ISP is eminently sensible and should be considered. Trying to defend against a 100 Mbps attack at a 10 Mbps link means you have already lost the fight, and legitimate traffic will find it very difficult to get through. Applying the right filters in the cloud at a 1 Gbps concentration point is undoubtedly the right thing to do.
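The arithmetic behind "already lost the fight" can be sketched with a simple back-of-envelope model, assuming the saturated link drops packets indiscriminately in proportion to the offered load (the function and its name are illustrative, not a statement about any product):

```python
def legitimate_throughput_mbps(link_mbps, attack_mbps, legit_mbps):
    """Rough share of legitimate traffic surviving a saturated link,
    assuming indiscriminate drops in proportion to offered load."""
    offered = attack_mbps + legit_mbps
    if offered <= link_mbps:
        return legit_mbps  # link not saturated: everything gets through
    return link_mbps * legit_mbps / offered

# A 100 Mbps attack against a 10 Mbps link carrying 5 Mbps of real traffic:
survives = legitimate_throughput_mbps(10, 100, 5)  # ~0.48 Mbps gets through
```

Filtering the same attack upstream at a 1 Gbps concentration point leaves the link unsaturated, which is the whole point of pushing volume defence into the cloud.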
The problem with this approach in isolation is that few CDN operators or ISPs will understand the traffic of a particular customer and thus will not necessarily be able to prevent the more intelligent application layer attacks.
Where should these attacks be turned back? They must be handled on-premises, for sure, but where? Many would suggest the perimeter firewall or Intrusion Prevention device. While many enterprise devices in this class offer some sort of protection, especially at the network level, they frequently lack the application intelligence to protect against the more sophisticated attacks seen today.
Others swear by dedicated DoS mitigation devices. A good option, for sure, but they are costly and suffer from the same shortcomings against the volume attacks discussed above. Further, they are likely to be susceptible to SSL-based attacks, as few devices have the processing power to handle the large volume of encryption and decryption required during attack conditions.
Probably the wisest choice is to use an Application Delivery Controller (ADC) for on-premises protection. The fact that most companies already possess an ADC, together with its strategic position in the network where it sees all traffic of importance, puts this device in a unique position to defend against DoS/DDoS attacks. Built as they are with scalability of throughput and connection handling in mind, ADCs offer a degree of mitigation against volume-based attacks. Their application intelligence means they understand what to look for in protocol and application attacks and how to block it, and their built-in SSL acceleration can even handle attack vectors hidden by encryption.
Defence in depth
An ideal overall solution would likely combine cloud-based defences to filter out the volume attacks with an on-premises ADC as the most effective protection for application-layer attacks. This layered approach provides defence in depth and maximises the return on investment.
ADC devices, like Citrix NetScaler, are placed in the network to handle application delivery and protection and their configuration and design means that they are an effective defence for all sorts of attacks – DoS included.
Whether it is purchased as a security device or an optimisation device, the traffic processing is the same, and the ability to do both jobs simultaneously does not impact its ability to do either individually.