We have some new F5 load balancers in our environment, which means I need a way to grab regular configuration backups. There are a number of methods out there, but I’ve opted to use SolarWinds’ CatTools software since we already own it.
The config I used is based on this blog post. It’s a great write-up on how to back up F5 configurations using CatTools. I don’t want to replicate what was written over there, but I did hit some issues specific to my use case that I wanted to share.
While I was happy to find the article linked above, the immediate results didn’t work so smoothly for me. This may be due to some key configuration differences in my network:
- All of the F5s are LDAP-integrated, so there isn’t an easy way to give LDAP users direct bash access
- All of my F5s are remote appliances, so the backup archive gets copied across the WAN
Getting around the first problem was my biggest challenge. CatTools is a very command/response-oriented application. Any remote LDAP-authenticated users are immediately dropped into F5’s shell, tmsh. Getting from there to the ‘advanced shell’ is as simple as typing bash. However, when the terminal prompt changes, it often throws CatTools into a state of “I didn’t receive the response prompt I expected, therefore kill the job – something went wrong.” I spent a bit more time on this than I wanted to, but the underlying problem was that the “F5.BigIP” device type was specifically looking for the tmsh shell and couldn’t handle the prompt change. The fix? Switch the device type to “Linux.RedHat.Bash”, then add the bash command as the first line of your backup script.
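To picture the prompt change that trips CatTools up: the exact prompt strings vary by BIG-IP version and device state, so treat this as a rough illustration (hostname and user are made up), but a manual session goes from the tmsh prompt to the bash prompt roughly like this:
user@(bigip1)(cfg-sync Standalone)(Active)(/Common)(tmos)# bash
[user@bigip1:Active:Standalone] ~ #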
The next problem was using TFTP to copy the backup archives over the WAN. Even some of the new F5s with minimal configuration still generate a ~10 MB file. That doesn’t seem like much, but TFTP sends one small block at a time and waits for an acknowledgement before sending the next, so copying that archive over a WAN between two datacenters turns into a ~5 minute file transfer. CatTools by default will only wait 30 seconds after executing a command before it expects a response, so every time I tried to run the job, CatTools would kill it only 30 seconds into the file transfer. Luckily, CatTools supports a utility command that can alter the normal timeout:
%ctUM: Timeout 600
tftp -m binary 192.168.1.10 -c put $filename
%ctUM: Timeout 0
The command %ctUM: Timeout 600 changes the timeout value to 600 seconds, or 10 minutes. The TFTP transfer runs next and is now allowed up to 10 minutes to finish. The last command resets the timeout back to the default (30 seconds).
I also realized that the original script doesn’t purge the backup archive afterwards. For my use case, I would much rather automatically clean up the backup files once they’ve been transferred to a central location, so I remove the archive once the TFTP transfer is done.
So after all that, here is the version of that script that I’m using:
bash
export date=`date +"%y%m%d"`
export filename=$HOSTNAME.$date.ucs
tmsh save /sys ucs /var/local/ucs/$filename
cd /var/local/ucs
%ctUM: Timeout 600
tftp -m binary 192.168.1.10 -c put $filename
%ctUM: Timeout 0
rm -f $filename
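One optional variation, in case you want a local safety net: instead of deleting the archive right away, you could keep the last few UCS files on the box and purge only the older ones. This is just a sketch on my part rather than something from the original post; it assumes the GNU ls/tail/xargs that ship with BIG-IP’s CentOS-based shell, and the retention count of three is arbitrary. It would replace the rm -f line:
ls -1t $HOSTNAME.*.ucs | tail -n +4 | xargs -r rm -f
Since the script has already changed into /var/local/ucs, this lists that host’s archives newest first, skips the three most recent, and removes the rest.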
Thanks again to the original blog post for getting me on the right track with this! I hope that my ramblings here are helpful to anyone with a similar deployment scenario.