Load balancing your firewalls
Spreading the work without making more work for yourself.
In our mini-series on content networking, we've already discussed the advantages you can gain in terms of performance and resilience if you load balance your web servers using content switches (see Content Switching and Layer 7 Load Balancing). It stands to reason that this could be used to benefit firewall performance too, but there are a couple of things you have to watch out for here by virtue of the way that firewalls operate.
Apart from the obvious advantages of redundancy and extra throughput, there's another gain to be had from being able to direct traffic to specific firewalls. If you can make the decision based on the service being accessed - HTTP, FTP, H.323 etc - you could conceivably set up firewall banks, each of which is configured to handle only that traffic type. Now this may well be overkill for smaller organisations, but if you have multiple firewalls, it gives you the option to configure each very tightly to pass only specific traffic types, rather than have all firewalls with a massive policy covering every type of data that needs to be allowed into your network.
Nearly all firewalls today are stateful, i.e. they maintain a record of the state of every session that is allowed to pass through them. To ensure that traffic passing through them is part of a bona fide transaction, they need to see the traffic flows in both directions. This means that you can't simply pop a load balancing content switch in front of them and let it distribute flows through whichever firewall it thinks is best at the time.
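To see why per-packet balancing breaks stateful inspection, consider this minimal sketch of a session table. The class and method names are purely illustrative, not any vendor's implementation: each firewall only accepts return traffic for sessions it has itself recorded.

```python
# Minimal sketch of why naive load balancing breaks stateful firewalls.
# A firewall records each outbound session; return traffic is accepted
# only if it matches a session that this particular firewall has seen.
# All names here are illustrative, not from any vendor's API.

class StatefulFirewall:
    def __init__(self):
        self.sessions = set()  # (src, dst, src_port, dst_port)

    def outbound(self, src, dst, sport, dport):
        """Record the outbound flow so the return flow can be matched."""
        self.sessions.add((src, dst, sport, dport))
        return True

    def inbound(self, src, dst, sport, dport):
        """Accept return traffic only for a session this firewall created."""
        return (dst, src, dport, sport) in self.sessions

fw_a, fw_b = StatefulFirewall(), StatefulFirewall()

# The outbound request goes through firewall A...
fw_a.outbound("10.0.0.5", "203.0.113.9", 40000, 80)

# ...so the reply is accepted by A but dropped by B, which holds no state for it.
print(fw_a.inbound("203.0.113.9", "10.0.0.5", 80, 40000))  # True
print(fw_b.inbound("203.0.113.9", "10.0.0.5", 80, 40000))  # False
```

If a content switch sent the reply to firewall B, the session would simply be dropped - which is why the return path has to be pinned to the firewall that saw the original flow.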
Now, to provide redundancy, most firewall vendors do offer a failover configuration with two firewalls - either one active and one in standby, or both active, with session information synchronised between the pair. The failover decision is made either by a proprietary mechanism or by a protocol such as VRRP, but all this gives you is a level of resilience. You won't get extra performance, and there's no way to intelligently load balance the traffic.
In its basic concept, adding content switches to the topology is similar to how you would deploy them if you were using them to balance web traffic to your servers. However, there is one significant difference: you'll probably need twice as many.
Most content switches are designed so that when using them to balance firewalls, you put one (or two, for resilience) on either side of the firewall - the 'clean' side and 'dirty' side, or the inside and outside, depending on your terminology. This is necessary to allow the state to be maintained.
In order for a stateful firewall to work, it needs to see the traffic flow in both directions, so you need content switches at the ingress to the firewall on both sides to make sure return traffic is routed back through the correct path. Typically, the content switch will use a hashing algorithm involving source and destination IP addresses to choose the best real server to send that traffic to. In this instance, though, the 'real server' is actually an interface on the opposing content switch on the other side of the firewalls.
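The hashing idea can be sketched in a few lines. This is not any vendor's actual algorithm - the XOR hash below is just one simple way to get the key property: the hash is symmetric in the two addresses, so the switch on the far side, seeing source and destination swapped in the reply, computes the same value and steers the traffic back through the same firewall.

```python
# Sketch of the address-hashing idea behind firewall load balancing.
# Both content switches hash a symmetric key built from the two IP
# addresses, so the reply takes the same firewall path as the request.
# The XOR hash is illustrative only, not a real vendor algorithm.

import ipaddress

def pick_firewall(src_ip: str, dst_ip: str, n_firewalls: int) -> int:
    """Choose a firewall index from a symmetric hash of the two addresses.

    XOR makes the key order-independent, so hashing (src, dst) on one
    switch and (dst, src) on the opposing switch selects the same path.
    """
    a = int(ipaddress.ip_address(src_ip))
    b = int(ipaddress.ip_address(dst_ip))
    return (a ^ b) % n_firewalls

# The outbound flow on the inside switch and the reply on the outside
# switch land on the same firewall:
out_path = pick_firewall("10.0.0.5", "203.0.113.9", 2)
back_path = pick_firewall("203.0.113.9", "10.0.0.5", 2)
print(out_path == back_path)  # True
```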
The switch then has to know which path to take to send traffic to that address - this is how you set the switch to route traffic through specific firewalls. If you have two firewalls with one content switch on either side, for instance, the switches have a choice of two paths. You have to configure these paths on each content switch, telling it typically the interface to use and next hop for each end-point address.
The number of entries that you have to configure here will ramp up as you add more firewalls and content switches. A pair of firewalls with two switches on either side will require each switch to have four entries (since each of the two opposing switches has two links, one to each firewall). Add another firewall, and you'll have to add more entries to each content switch.
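The arithmetic is simple enough to state directly - each switch needs one path entry per (opposing switch, firewall) pair - but it's worth seeing how quickly it grows. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope for the path entries each content switch must hold:
# one entry per (opposing switch, firewall) pair. Illustrative arithmetic
# only; real products may count configuration items differently.

def paths_per_switch(opposing_switches: int, firewalls: int) -> int:
    """Each opposing switch is reachable through each firewall."""
    return opposing_switches * firewalls

print(paths_per_switch(2, 2))  # 4 - the two-firewall example above
print(paths_per_switch(2, 3))  # 6 - after adding a third firewall
```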
Different vendors offer different ways of doing this, and also different extra features. Ideally, you want a content switch that can carry out health checking of each path, so it can determine if a firewall or other switch fails. You also need to find out how the content switches themselves support their own redundancy.
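What path health checking amounts to, in the abstract, is probing each firewall path and removing failed ones from the candidate list before the hashing decision is made. The sketch below is hypothetical - `probe` stands in for whatever check a given switch actually uses (ICMP, ARP, or an application-level request to the far-side switch):

```python
# Hypothetical health-check step: probe each firewall path and keep
# only the ones that respond. probe() is a stand-in for whatever
# mechanism the switch supports (ICMP, ARP, HTTP to the far side).

def healthy_paths(paths, probe):
    """Return only the paths whose probe currently succeeds."""
    return [p for p in paths if probe(p)]

paths = ["via-fw1", "via-fw2"]
probe = lambda p: p != "via-fw2"    # simulate firewall 2 being down
print(healthy_paths(paths, probe))  # ['via-fw1']
```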
If you have a DMZ for extranet services, you won't have just two interfaces on your firewalls. If you don't want to fork out for another pair of content switches to connect to those interfaces, you'll need switches that support multiple VLANs and provide extra ports.
NAT is also an area to watch out for. Since the content switch's initial decisions are made based on IP addresses, changing these on the firewall may break the model and prevent traffic being routed back the way it came. Different vendors handle this in different ways, so it's important you find out how this can be supported.
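The NAT problem follows directly from the hashing approach: once the firewall rewrites an address, the switch on the other side is hashing a different key, so it may pick a different firewall for the return traffic. A sketch with an illustrative XOR hash and made-up addresses:

```python
# Why NAT can break address hashing: after the firewall rewrites the
# source address, the outside switch hashes a different key and can
# steer the reply through the wrong firewall. Hash and addresses are
# illustrative only.

import ipaddress

def pick_firewall(ip1: str, ip2: str, n_firewalls: int) -> int:
    a = int(ipaddress.ip_address(ip1))
    b = int(ipaddress.ip_address(ip2))
    return (a ^ b) % n_firewalls

inside_key = pick_firewall("10.0.0.5", "203.0.113.9", 2)        # pre-NAT
outside_key = pick_firewall("198.51.100.2", "203.0.113.9", 2)   # post-NAT
print(inside_key == outside_key)  # False - the two sides choose different paths
```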
Using content switches to load balance firewalls does offer greater performance and flexibility, but it can be more complex to set up and manage too. Make sure you know exactly what you need it to do so you can be sure the equipment you're considering supports all your requirements.