Maintenance
Logrotate cacti.log
Requirements
By default, cacti uses the file <cacti_dir>/log/cacti.log for logging purposes. There is no automatic cleanup of this file, so without further intervention there's a good chance that it eventually hits a file size limit of your filesystem, which stops any further polling. For *NIX type systems, logrotate is a widely known utility that solves exactly this problem. The following description assumes you've set up a standard logrotate environment. The examples are based on a Fedora 6 environment; I assume that Red Hat type installations work the same way. I hope, but am not sure, that this howto is easily portable to Debian/Ubuntu and hopefully even to *BSD.
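Before setting anything up, it is worth verifying that logrotate is present and run daily, and checking how large the log has already grown. A minimal sketch, assuming a Fedora/Red Hat layout and the default cacti path:

```
# is logrotate installed? (rpm-based systems)
rpm -q logrotate
# is it run daily by cron?
ls -l /etc/cron.daily/logrotate
# how big is the log right now?
ls -lh /var/www/html/cacti/log/cacti.log
```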
The logrotate Configuration File
The logrotate function is well described in the man pages. My setup is as follows:
```
# logrotate cacti.log
/var/www/html/cacti/log/cacti.log {
    # keep 7 versions online
    rotate 7
    # rotate each day
    daily
    # don't compress, but
    # if disk space is an issue, change to
    # compress
    nocompress
    # create new file with <mode> <user> <group> attributes
    create 644 cactiuser cactiuser
    # add a YYYYMMDD extension instead of a number
    dateext
}
```
Descriptions are given inline as comments. Copy those statements into /etc/logrotate.d/cacti; this is the recommended place for application-specific logrotate configuration.
Test
logrotate configuration files are tested by running
```
logrotate -fd /etc/logrotate.d/cacti
reading config file /etc/logrotate.d/cacti
reading config info for /var/www/html/cacti/log/cacti.log

Handling 1 logs

rotating pattern: /var/www/html/cacti/log/cacti.log  forced from command line (7 rotations)
empty log files are rotated, old logs are removed
considering log /var/www/html/cacti/log/cacti.log
  log needs rotating
rotating log /var/www/html/cacti/log/cacti.log, log->rotateCount is 7
glob finding old rotated logs failed
renaming /var/www/html/cacti/log/cacti.log to /var/www/html/cacti/log/cacti.log-20071004
creating new log mode = 0644 uid = 502 gid = 502
```
This is a dry run; no rotation is actually performed. Option -f forces log rotation even if the rotation criterion is not fulfilled. Option -d prints debug output and suppresses any real log rotation. Verify by listing the log directory: nothing has changed at all!
Now we will request log rotation using
```
logrotate -f /etc/logrotate.d/cacti
```
No output is produced, but you will see the effect:
```
ls -l /var/www/html/cacti/log
-rw-r--r-- 1 cactiuser cactiuser      0  4. Okt 21:35 cacti.log
-rw-r--r-- 1 cactiuser cactiuser 228735  4. Okt 21:35 cacti.log-20071004
```
Of course, the date extension on the file will change accordingly. Notice that a new, empty cacti.log was created. If you issue the command again on the same day, nothing happens, because the dated destination file already exists:
```
logrotate -fv /etc/logrotate.d/cacti
reading config file /etc/logrotate.d/cacti
reading config info for /var/www/html/cacti/log/cacti.log

Handling 1 logs

rotating pattern: /var/www/html/cacti/log/cacti.log  forced from command line (7 rotations)
empty log files are rotated, old logs are removed
considering log /var/www/html/cacti/log/cacti.log
  log needs rotating
rotating log /var/www/html/cacti/log/cacti.log, log->rotateCount is 7
destination /var/www/html/cacti/log/cacti.log-20071004 already exists, skipping rotation
```
If you want to see all seven rotations on a single day, temporarily remove the dateext directive from the configuration file.
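Without dateext, rotated files get numeric suffixes (cacti.log.1 through cacti.log.7), so repeated forced runs are not skipped. A minimal test sketch, assuming the configuration file shown above:

```
# with dateext removed, each forced run shifts the numbered copies up by one
for i in $(seq 1 7); do
    echo "test entry $i" >> /var/www/html/cacti/log/cacti.log
    logrotate -f /etc/logrotate.d/cacti
done
ls -l /var/www/html/cacti/log/
```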
Daily MySQL Dump of the Cacti SQL Database using logrotate
Requirements
By default, cacti uses the MySQL database named cacti. You may want to dump this database at regular intervals as a failsafe. For a single dump, you would usually enter the dump command directly into crontab. It is also possible to mis-use logrotate to create daily dumps, append dateext-like timestamps to each dump, and keep a distinct number of generations online. For the basic setup, see Logrotate cacti.log above; the same assumptions apply (a standard logrotate environment, examples based on Fedora 6, and hopefully easy porting to Debian/Ubuntu and *BSD).
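For comparison, a plain nightly dump via crontab, without any rotation, could look like the following sketch; the schedule, the target path /var/backup and the credentials are placeholders and must match ./include/config.php:

```
# m h dom mon dow command -- dump the cacti database every night at 02:30
30 2 * * * /usr/bin/mysqldump --user=cactiuser --password=cactiuser --lock-tables cacti > /var/backup/cacti_dump.sql
```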
The logrotate Configuration File for MySQL Dumping the Cacti Database
For this example, it is absolutely necessary that a dump file already exists; otherwise, logrotate will skip any execution due to a missing "log" file. My setup is as follows:
```
# logrotate sql dump file
/var/www/html/cacti/log/cacti_dump.sql {
    # keep 31 generations online
    rotate 31
    # create a daily dump
    daily
    # don't compress the dump
    nocompress
    # create using this <mode> <user> <group>
    create 644 cactiuser cactiuser
    # append a nice date to the file
    dateext
    # delete all generations older than 31 days
    maxage 31
    # dump the database BEFORE the previous cacti_dump.sql is rotated away;
    # rotation then renames this fresh dump to the dated file
    # make sure to use the correct database, user and password, see ./include/config.php
    prerotate
        /usr/bin/mysqldump --user=cactiuser --password=cactiuser --lock-tables --add-drop-database --add-drop-table cacti > /var/www/html/cacti/log/cacti_dump.sql
    endscript
}
```
You may add this configuration to /etc/logrotate.d/cacti, even if the cacti.log entry from above is already present there. Prior to testing this configuration, don't forget to
```
touch /var/www/html/cacti/log/cacti_dump.sql
```
Now run the test as follows
```
logrotate -fv /etc/logrotate.d/cacti
reading config file /etc/logrotate.d/cacti
reading config info for /var/www/html/cacti/log/cacti_dump.sql

Handling 1 log

rotating pattern: /var/www/html/cacti/log/cacti_dump.sql  forced from command line (31 rotations)
empty log files are rotated, old logs are removed
considering log /var/www/html/cacti/log/cacti_dump.sql
  log needs rotating
rotating log /var/www/html/cacti/log/cacti_dump.sql, log->rotateCount is 31
glob finding old rotated logs failed
running prerotate script
renaming /var/www/html/cacti/log/cacti_dump.sql to /var/www/html/cacti/log/cacti_dump.sql-20071004
creating new log mode = 0644 uid = 502 gid = 502
```
Now list the results
```
ls -l /var/www/html/cacti/log/cacti_dump*
-rw-r--r-- 1 cactiuser cactiuser      0  4. Okt 22:10 cacti_dump.sql
-rw-r--r-- 1 cactiuser cactiuser 318441  4. Okt 22:10 cacti_dump.sql-20071004
```
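To restore from one of these generations, feed the dump back into mysql. A minimal sketch; the dated filename is just an example, and the credentials must again match ./include/config.php:

```
# the dump was created with --add-drop-table, so restoring simply
# replaces the tables of the existing cacti database
mysql --user=cactiuser --password=cactiuser cacti < /var/www/html/cacti/log/cacti_dump.sql-20071004
```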
Migrating RRD Files between Architectures
- Run this script on the source host.
```
#!/bin/csh
# This script dumps all rrds in the working dir
# into xml files.
set files=`echo *.rrd`
foreach file ( $files )
    set base = `basename $file .rrd`
    /bin/rrdtool dump $file > /tmp/new1/$base.xml
end
```
- tar and gzip the xml files and transfer them to the target system.
- Uncompress and untar the file into an empty directory (a sketch of these two transfer steps follows below).
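A minimal sketch of the transfer, assuming the /tmp/new1 dump directory from the script above and a hypothetical target host named target:

```
# on the source host: pack the xml dumps and copy them over
cd /tmp/new1
tar czf rrd_dumps.tar.gz *.xml
scp rrd_dumps.tar.gz cactiuser@target:/tmp/

# on the target host: unpack into an empty directory
mkdir /tmp/rrd_restore && cd /tmp/rrd_restore
tar xzf /tmp/rrd_dumps.tar.gz
```

The verbose xml dumps compress well, so gzipping before the transfer is worthwhile.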
- Run this script on the target host in the chosen directory
```
#!/bin/csh
# restores rrds from xml dumps.
set files=`echo *.xml`
foreach file ( $files )
    set base = `basename $file .xml`
    /opt/local/bin/rrdtool restore $file /opt/local/htdocs/cacti/rra/$base.rrd
end
```
Migrating RRD Files between Architectures and Hosts
The goal of this part is much the same as the previous one, but the script was created with some additional restrictions in mind:
- neither the source nor the target host has enough space for both the rrd and the xml files
- communication between both hosts is restricted to encrypted channels only (i.e. ssh, scp)
The script (rrdmigrate.pl) has to be run on the host where the rrd files already exist. It shells out to several commands that must be present and available in the $PATH of the user in question (a quick check is sketched after the parameter list); otherwise you would have to provide /full/path/to/command, which requires customizing the script and was therefore omitted. It accepts some required and some optional parameters:
- -t specifies the target host the data is copied to
- -i specifies the ssh key file (e.g. ~/.ssh/id_rsa) to avoid a multitude of password prompts
- -u specifies the user on the target system, which must have write access to the target directory (usually: cactiuser)
- -f specifies the filemask of the rrd files to be treated. Enclose it in quotes! e.g. "/var/www/html/cacti/rra/*.rrd"
optionally
- -d debug level: [0|1|2]
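Before starting, you can quickly verify that the external tools are reachable; the list follows from the run_cmd calls in the script below:

```
# check on the source host; rrdtool and gunzip must also exist on the target
for cmd in rrdtool gzip scp ssh; do
    command -v $cmd >/dev/null || echo "$cmd not found in \$PATH"
done
```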
Example
```
perl rrdmigrate.pl -t target -u cactiuser -i /home/cactiuser/.ssh/id_rsa -f "/var/www/html/cacti/rra/*.rrd"
```
This will dump, move and restore all rrd files, one by one, to the target host target using cactiuser and the given ssh key. It needs only minimal space overhead, for the rrd file currently being processed. Do not expect this to be as fast as scp-ing rrd files between servers, so please test in advance whether there's a real need for rrdtool dump and restore!
```
#!/usr/bin/perl
# -----------------------------------------------------------------------------
$NAME_     = basename($0);
$PURPOSE_  = "migrate rrds to another host/architecture";
$SYNOPSIS_ = "$NAME_ -f <filemask> -t <target host> -u <cactiuser> -i <ssh key> [-d <debug>]";
$REQUIRES_ = "Getopt::Std, File::Basename";
$VERSION_  = "Version 0.1";
$DATE_     = "2010-12-23";
$AUTHOR_   = "Reinhard Scheck";
# -----------------------------------------------------------------------------
# This program is distributed under the terms of the GNU General Public License
# --- required Modules --------------------------------------------------------
use Getopt::Std;
use File::Basename;

# --- initialization ----------------------------------------------------------
my $debug = 0;    # minimal output

# --- usage -------------------------------------------------------------------
sub usage {
    print STDOUT "$NAME_ $VERSION_ - $PURPOSE_
Usage:    $SYNOPSIS_
Requires: $REQUIRES_
Author:   $AUTHOR_
Date:     $DATE_
Options:  -f, filemask of the source rrds, enclose in tics!
          -t, target host
          -u, cactiuser
          -i, ssh key filename used for scp, ssh operations
          -d, debug level (0=standard, 1=function trace, 2=verbose)
          -h, usage and options (this help)
No parameter validation done. Hope you know what you're going to do!\n\n";
    exit 1;
}

# --- write_log ---------------------------------------------------------------
sub write_log {
    my $_level = $_[0];
    my $_text  = $_[1];

    if ( $debug >= $_level ) { print $_text; }
    return 0;
}

# --- run_cmd -----------------------------------------------------------------
sub run_cmd {
    my $_cmd = $_[0];
    my $_lvl = $_[1];
    my $_pre = defined($_[2]) ? $_[2] . " " : '';    # optional log line prefix

    &write_log($_lvl, $_pre . $_cmd . "\n");
    system($_cmd);
}

# --- main ----------------------------------------------------------------------
# --- assign input parameters ---------------------------------------------------
getopts('ht:d:f:i:u:');
&usage() if $opt_h;
defined($opt_d) ? ($debug = $opt_d      ) : ($debug = 0 );
defined($opt_i) ? ($key   = "-i $opt_i" ) : ($key   = '' );

# --- check for dependent parms --------------------------------------------------
if ( !defined($opt_f) ) { &write_log(0, "Option -f missing\n\n"); &usage; } else { $filemask = $opt_f; }
if ( !defined($opt_t) ) { &write_log(0, "Option -t missing\n\n"); &usage; } else { $host     = $opt_t; }
if ( !defined($opt_u) ) { &write_log(0, "Option -u missing\n\n"); &usage; } else { $user     = $opt_u; }

# --- suffixes --------------------------------------------------------------------
my $_gzip_ext = ".gz";
my $_xml_ext  = ".xml";

# --- loop for all files of given filemask ------------------------------------------
my @files = glob($filemask);

for my $file ( @files ) {
    my ($fname, $path, $ext) = fileparse($file, '\..*');
    my $_xml_file  = $path . $fname . $_xml_ext;
    my $_gzip_file = $_xml_file . $_gzip_ext;

    # dump the rrd to xml and compress it (gzip replaces the xml by a .gz file)
    &run_cmd("rrdtool dump $file > $_xml_file", 0);
    &run_cmd("gzip -fq " . $_xml_file, 1);
    # copy the compressed dump to the target host
    &run_cmd("scp -q $key $_gzip_file $user\@$host:$_gzip_file", 1);
    # on the target: unzip, replace the old rrd by a freshly restored one, clean up
    &run_cmd("ssh -q $key $user\@$host \"gunzip $_gzip_file; rm -f $file; rrdtool restore $_xml_file $file; rm -f $_xml_file\"", 1);
    # remove the local .gz copy
    &run_cmd("rm -f $_gzip_file", 1);
    &write_log(0, " ... done.\n");
}
```
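After the migration, a quick spot check is advisable. A minimal sketch using a hypothetical rrd filename; run it on both hosts and compare the output:

```
# prints the unix timestamp of the last update stored in the rrd;
# it should match on source and target for a migrated file
rrdtool last /var/www/html/cacti/rra/localhost_load_1min_5.rrd
```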