The second part of the two-part Scalable Linux Clusters article. This installment adds Heartbeat and ldirectord to the network built in Part I.
Figure 1: LVS Direct Routing
Part I of this article walked through the setup of a Linux Virtual Server (LVS) cluster that matches the diagram in Figure 1. This cluster used IPVS direct routing as the method of directing requests from the load balancer to the destination node. While this provides an easy way to add and remove nodes from the cluster and to scale the service, this solution is no better than DNS round-robin in terms of high availability.
Most of this article concerns the preparation and configuration of the Heartbeat setup. If you followed part one of the article and have moved your service over to a manually configured load balancer, your service will not be affected until the final section, when the Heartbeat service is started. That section includes a recommended grouping of steps that will keep any downtime to a negligible amount.
Figure 2: Load Balancer Redundancy
Figure 2 shows the addition of a failover load balancer to the previous network setup. The dotted line coming from the Internet indicates that this load balancer would receive traffic from the Internet in the event that the primary load balancer is unavailable.
The additional load balancer needs to be added to the same subnet as the rest of the machines from part one. Then, install Heartbeat on both load balancers. Heartbeat should be available in your distribution’s package repository. If not, you can install it from source.
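For example (package names can vary between distributions):

# Debian-style systems
apt-get install heartbeat

# Red Hat-style systems
yum install heartbeat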
There are two distinct versions of Heartbeat, version 1 and version 2, and two corresponding ways to configure Heartbeat resources. One is the legacy approach from Heartbeat version 1, which uses the
haresources file. While this approach is still supported by version 2, a new XML configuration format was introduced for the new Cluster Resource Manager (CRM). Because the CRM system is much more complicated than
haresources, and its features are not relevant to a network like the one in Figure 2, this article will continue using the version 1 configuration. You should install version 2 regardless, in case you end up needing the CRM system in the future.
There are three files that must be edited: ha.cf, authkeys, and haresources. Usually these are located in /etc/ha.d/. The ha.cf file needs a minimum of three configuration options: a method of communicating with the other nodes, a setting for whether to automatically move a resource back to its primary host after an outage, and a list of Heartbeat nodes.
bcast eth1
auto_failback on
node load1 load2
While these three options are sufficient, there are usually more you should set, such as where to write logs. Your distribution should provide a default configuration file; make the changes for the three necessary values and confirm that the rest of the file is filled out reasonably (the default file should be well commented). Make the same changes on both load balancers, unless you use a unicast connection for communication, in which case each host's configuration names the real IP address of the other load balancer. If you can, it is beneficial to hook the load balancers together with a crossover cable, in which case the broadcast method works well and requires minimal configuration effort. The next best option is to put the machines on a backend network, where unicast is preferable because it avoids excess broadcast traffic.
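For reference, a fuller ha.cf might look like the following sketch; the timing values and log facility here are illustrative assumptions, not requirements:

# /etc/ha.d/ha.cf
logfacility local0   # send Heartbeat logs to syslog
keepalive 2          # seconds between heartbeat packets
deadtime 10          # seconds of silence before a node is declared dead
bcast eth1           # or: ucast eth1 <IP of the other load balancer>
auto_failback on
node load1
node load2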
The next file is authkeys, which provides a key to secure communication between the load balancers. The content of this file is simple and needs to be the same on both load balancers. Replace “long_random_password” with a random string to use as the key. Also, set the permissions on the file to prevent it from being read by any user except root.
auth 1
1 sha1 long_random_password
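For example, assuming the file lives at /etc/ha.d/authkeys, run the following on both load balancers:

chown root:root /etc/ha.d/authkeys
chmod 600 /etc/ha.d/authkeys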
The third file for your Heartbeat setup is haresources. This file needs only one line, which specifies the VIP address you use to access the service.
load1 192.0.2.100/32/eth0 ldirectord
“load1” is the hostname of the primary host for this resource. The file must be identical on both load balancers; do not use the hostname of the second load balancer in this file on either host. Replace the IP address above with your VIP, and leave the netmask of 32 unchanged; this keeps the address from being used to route outbound traffic. Also, change “eth0” to the public interface you are using. The last portion of the line, “ldirectord,” specifies the service to start and stop along with bringing the interface up. Rather than a conventional service such as a web server, this resource starts a tool called ldirectord, which manages the IPVS configuration.
Heartbeat should now be configured correctly. However, do not start the service yet. If you followed part one of this article and already have the VIP address set up on the load balancer, Heartbeat would not be able to control it as a resource. Furthermore, without ldirectord configured yet, the destination nodes would not have any traffic routed to them.
The ldirectord daemon manages the setup of IPVS on the load balancer. It checks the availability of the services provided by the destination nodes and removes them from the load balancer if they become unavailable. This prevents the load balancer from passing requests on to nodes that are malfunctioning.
ldirectord is written in Perl, so you will need a working Perl installation (most Unix systems have one). Extra Perl modules may be required, depending on which services you will be validating. ldirectord may be available in your package repository as a standalone package (perhaps called “heartbeat-ldirectord”) or as an option of the heartbeat package. The ldirectord website also describes how to download a snapshot of the code repository for a source installation.
Double-check that there is a reference to ldirectord in the directory /etc/ha.d/resource.d. If you installed a package, it should have been created there. Otherwise, create a symlink named ldirectord in that directory pointing to wherever the ldirectord daemon resides (commonly /usr/sbin/ldirectord).
At this point, you may want to set up the load balancers to provide a fallback service in case all the nodes are down. For HTTP services, a simple fallback service would be a web server on each load balancer with no web pages and a custom 404 page that tells the user your web cluster is unavailable. A fallback service does not replace your actual service, but it is good to have regardless as most users will prefer a notice that something is wrong to a timeout.
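As a minimal sketch, assuming Apache on the load balancers (the paths and message here are placeholders), such a fallback site needs little more than:

<VirtualHost 127.0.0.1:80>
    # Empty document root, so every request falls through to the 404 message
    DocumentRoot /var/www/fallback
    ErrorDocument 404 "The web cluster is temporarily unavailable. Please try again shortly."
</VirtualHost>

The 127.0.0.1:80 address matches the “fallback” entry used in the ldirectord configuration below.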
The configuration for ldirectord includes a list of services to be load balanced. For each service, the destination nodes are specified, as well as the method to determine whether each node is functioning or not. This article will look at a simple configuration for HTTP load balancing; further service-checking options can be found in the ldirectord documentation.
logfile = "local0"
checkinterval = 5
autoreload = yes
virtual = 192.0.2.100:80
        real = 192.0.2.101:80 gate 5
        real = 192.0.2.102:80 gate 5
        fallback = 127.0.0.1:80 gate
        scheduler = wrr
        protocol = tcp
        checktype = 4
        request = "/server-status"
        receive = "Apache Server Status"
        negotiatetimeout = 20
Place this configuration in the ldirectord.cf file in /etc/ha.d/ or /etc/ha.d/conf/. Your distribution may already have a sample file in one of these directories; if not, you can choose either location. The line starting with the keyword “virtual” should have the VIP address of your service, along with the port your service runs on. The next two lines define the destination nodes. Use the RIP addresses of the nodes; these are the “real” servers. The “gate” keyword tells ldirectord to set up these servers in IPVS using direct routing, following the example started in part one of the article. The service will use weighted round-robin (“wrr”) as the algorithm to balance traffic, and both nodes are given equal weights of five.
Aside from the declaration of the service and the nodes that provide it, the check mechanism is specified. In this case “checktype” is set to 4. The numbered checktypes are actually a hybrid between the “negotiate” and “connect” checktypes. The service will be checked every five seconds (the “checkinterval”) and, in this case, every fourth check will be “negotiate.” The intermediate checks will be of type “connect.” The interval between negotiate checks, four, corresponds to the number “4” we used for the checktype.
The negotiate check will make an HTTP connection to the service, while the connect checks just open and close a TCP connection. The HTTP connection here is configured to send the request URL “/server-status” and expect to receive back a page that contains the text ”Apache Server Status.” If any of these checks fail, the node is removed from the pool of available servers. Lastly, the “negotiatetimeout” setting sets how long to wait for the HTTP response. Normally, you would not want to set this higher than the “checkinterval” (if you did and the service was near to the timeout limit, you would have multiple requests out at a time). However, the fact that only every fourth check will have this timeout allows “negotiatetimeout” to equal the product of “checkinterval” and “checktype.”
If you encounter the situation where ldirectord sees your service as down when it appears up to you, make sure you can connect directly from the load balancer to the service on the RIP address. Also, if you are using HTTP to check the contents of one of your web pages, you may be hitting a problem with virtual hosts. Specifying the “virtualhost” parameter under the virtual service in ldirectord.cf will allow you to check against the correct virtual host.
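As a sketch, with www.example.com standing in for your site’s hostname, the parameter sits under the virtual service block alongside the check options shown earlier:

virtual = 192.0.2.100:80
        virtualhost = "www.example.com"
        request = "/server-status"
        receive = "Apache Server Status"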
Turning it on
Before turning on Heartbeat, you will have to undo certain steps from the previous article. Running all the following steps together will provide the least downtime (if all goes well, it should be unnoticeable). First, the manually-configured IPVS service must be removed. This command clears any IPVS configuration:
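ipvsadm -C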
Next, the IP address for the VIP on the load balancer has to be brought down. Run the following command, substituting your VIP address for the keyword VIP:
ip addr del VIP dev eth0
or, if using ifconfig:
ifconfig eth0:1 down
Without the manually-configured resources in the way, we can start Heartbeat. On most systems, this looks something like:
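/etc/init.d/heartbeat start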
If you were reading ahead before performing any of these three steps, run the commands now. The second load balancer should not have any of the manual configuration, so just run the last command there to start Heartbeat. Also, do not forget to add this service to a runlevel on both load balancers so that Heartbeat will start again after a reboot.
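For example, depending on your distribution’s tools:

# Debian-style systems
update-rc.d heartbeat defaults

# Red Hat-style systems
chkconfig heartbeat on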
If something goes wrong, there are several layers to debug. First, you can always check the log files to get an idea of where the error may be occurring. To check whether Heartbeat has added the VIP, run ip addr list or ifconfig on the primary load balancer. Also, run ipvsadm without any arguments to see whether the nodes have been added to IPVS. If the service shows up, but no nodes, then there was likely a problem with the ldirectord availability checks. If the service and nodes show up, then there is a routing problem to diagnose.
The final test for your high-availability service is failover. Bring one of your destination nodes down; its weight will be set to zero in the IPVS table, meaning no requests will be passed to it. Then, try shutting down Heartbeat on the primary load balancer. The IP address should be removed from that load balancer and added to the backup load balancer. In addition, ldirectord on the backup should add the entries for the service and destination nodes to IPVS.
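One possible test sequence, assuming init scripts and the example addresses used above:

# On one destination node, simulate a failure (use httpd on Red Hat-style systems)
/etc/init.d/apache2 stop

# On the primary load balancer, watch the node's weight drop to zero
ipvsadm -L -n

# Then fail over the load balancer itself; the VIP should move to the backup
/etc/init.d/heartbeat stop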
The next time you have an unforeseen problem with any node or even with the load balancer, you can be confident that your infrastructure is ready to work around the problem to continue providing an available service.
- 32 is a subnet mask in Classless Inter-Domain Routing (CIDR) notation. It corresponds to the dotted-decimal subnet mask 255.255.255.255, which is represented in binary by a string of 32 ones.
- The service to be used is auto-determined from the port number if possible.
- For more reading on virtual hosts, see the Apache Virtual Host documentation at http://httpd.apache.org/docs/2.2/vhosts/.
This work is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.